21 contributions to Learn Microsoft Fabric
How are your data engineers positioned within the business?
Quick question for the group: How are your data engineers positioned within the business? In our proposed model, IT is responsible for extracting and landing source data into Bronze (raw/delta) in a Fabric Lakehouse, while data engineers sit closer to the business and own Silver and Gold transformations, including semantic/analytical modelling. Keen to hear how others have structured this in practice and what’s worked (or not).
0 likes • 1d
This is a very fluid situation; it seems they realise these things might take time to set up. Fabric might indeed be a great interim solution, but I have to meet them halfway. So how can I use ADLS Gen2 as my bronze storage, getting data from on-prem to the cloud? Once it's in Azure, it's plain sailing for me in Fabric.
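For the ADLS Gen2-as-bronze idea above, one common pattern is to land the data in an ADLS Gen2 container and then read it from a Fabric notebook (or surface it via a OneLake shortcut) using its abfss:// URI. A minimal sketch of building that URI, with all account, container, and path names hypothetical:

```python
# Sketch: build the abfss:// URI for a path in an ADLS Gen2 "bronze" container,
# so a Fabric notebook (or a OneLake shortcut) can point at it.
# All names below are hypothetical placeholders.

def abfss_uri(account: str, container: str, path: str) -> str:
    """Return the abfss:// URI for a path in an ADLS Gen2 container."""
    return f"abfss://{container}@{account}.dfs.core.windows.net/{path.lstrip('/')}"

bronze = abfss_uri("mystorageacct", "bronze", "/oracle/sales_orders")
print(bronze)

# In a Fabric notebook you would then read it with Spark, e.g.:
# df = spark.read.format("parquet").load(bronze)
```

This keeps the on-prem-to-cloud copy (e.g. via a gateway or ADF) decoupled from Fabric: once the files are in the container, Fabric just reads them in place.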
Data Transformation
Hi everyone,

We’re currently running a long-term evaluation of Microsoft Fabric in an organisation that has historically run an IBM stack for 20+ years (DataStage, Cognos, on-prem Oracle, etc.). As you’d expect, there are differing opinions internally on the best direction forward: some of our newer managers come from AWS or Snowflake environments, while others prefer to stay close to our IBM lineage.

My question to the community is around the transformation layer inside Fabric: what transformation tools are you actually using in production (or serious pilots) with Fabric, and why? Fabric gives us several options (T-SQL in Warehouse/Lakehouse, PySpark notebooks, Dataflows Gen2, and potentially dbt), but compared to something like IBM DataStage, Fabric’s GUI-driven transformation story is still evolving. Before we commit to a direction, I’m keen to understand from real-world users:

- Are you doing most of your transformation work inside Fabric itself (e.g., Data Pipelines + Dataflow Gen2 + PySpark + T-SQL)?
- Or are you keeping / adopting external transformation engines such as dbt Cloud, Databricks, Fivetran/Matillion/ADF, or even continuing with legacy ETL tools?
- How have you balanced capability vs cost? Adding external tools clearly introduces new spend, but Fabric alone may not yet match the maturity of platforms like DataStage.
- If you transitioned from GUI-based ETL tools (DataStage, Informatica, SSIS), what does your transformation architecture look like now?
- Anything you wish you knew before choosing your path?

Any insights, lessons learned, or architectural examples would be hugely appreciated. Thanks in advance!
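As a point of reference for the "inside Fabric" option, a Silver-layer transform in a Fabric notebook is typically just a PySpark (or T-SQL) job applying cleaning rules. The sketch below mimics the shape of such a bronze-to-silver step in plain Python for brevity; the table shape and column names are hypothetical, and in Fabric this would be PySpark over Delta tables.

```python
# Schematic bronze -> silver step: standardise column names, enforce types,
# and drop duplicate source extracts. Hypothetical columns; in Fabric this
# logic would run as PySpark over Delta tables rather than plain Python.

def to_silver(bronze_rows):
    """Apply simple cleaning rules to raw 'bronze' rows."""
    seen, silver = set(), []
    for row in bronze_rows:
        key = row["ORDER_ID"]
        if key in seen:                      # drop duplicate extracts
            continue
        seen.add(key)
        silver.append({
            "order_id": int(key),            # enforce numeric types
            "amount": float(row["AMT"]),
            "region": row["REGION"].strip().upper(),  # tidy text
        })
    return silver

bronze = [
    {"ORDER_ID": "1", "AMT": "10.5", "REGION": " apac "},
    {"ORDER_ID": "1", "AMT": "10.5", "REGION": " apac "},  # duplicate row
    {"ORDER_ID": "2", "AMT": "7.0", "REGION": "emea"},
]
print(to_silver(bronze))
```

Whichever engine runs it, keeping the rules this small and declarative is what makes the DataStage-to-Fabric comparison tractable: the logic ports, only the runtime changes.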
Oracle on Prem to LH (NUMBER)
Seeking advice on handling the Oracle NUMBER data type in Fabric.

I am trying to parameterise the loading of tables from an on-prem Oracle database into a Lakehouse in Microsoft Fabric. However, I have encountered a known issue: the Copy activity from Oracle to Lakehouse fails due to the NUMBER type. This prevents me from automating the process using Will's parameter-driven parent-child loop.

My approach involves populating a configuration table with the table names and other metadata to be loaded; the loop then processes each table from this configuration. The problem arises because some of the source tables use the NUMBER data type with no precision, which causes the load to fail with the error: "Invalid Decimal Precision or Scale. Precision: 38, Scale: 127." As a result, I cannot automatically load such tables into the Lakehouse.

Has anyone encountered this issue? If so, what workarounds or solutions have you implemented to overcome it?

UPDATE: I am currently automating this by bringing each table in as a CSV and then copying it to a Lakehouse table, but by doing so all data types become varchar.
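One workaround sometimes suggested for this class of error (hedged, not verified against every Oracle version) is to push an explicit cast into the Copy activity's source query, so unbounded NUMBER columns arrive with a fixed precision/scale instead of Oracle's default 38/127. Since the post already keeps a configuration table of metadata, that cast can be generated per table. A sketch, with all table, column, and type names hypothetical:

```python
# Sketch: generate an Oracle source query that casts unbounded NUMBER columns
# to a fixed precision/scale, so the Fabric Copy activity can map them.
# The columns dict would come from the post's configuration table; the
# chosen NUMBER(38,10) target is an assumption to tune per source system.

def build_select(table: str, columns: dict) -> str:
    """Build a SELECT with casts for NUMBER columns declared without precision."""
    parts = []
    for name, dtype in columns.items():
        if dtype == "NUMBER":  # NUMBER with no declared precision/scale
            parts.append(f"CAST({name} AS NUMBER(38,10)) AS {name}")
        else:
            parts.append(name)
    return f"SELECT {', '.join(parts)} FROM {table}"

print(build_select("SALES.ORDERS", {"ORDER_ID": "NUMBER", "CUSTOMER": "VARCHAR2"}))
# -> SELECT CAST(ORDER_ID AS NUMBER(38,10)) AS ORDER_ID, CUSTOMER FROM SALES.ORDERS
```

The generated string would be fed to the Copy activity as a query instead of a table name, keeping the parent-child loop fully metadata-driven while avoiding the CSV/varchar fallback.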
Request for help to test Microsoft Fabric with sample data.
Request for help to test Microsoft Fabric with sample data as part of a case study. Great if someone can help me with this activity. Thanks in advance.
2 likes • Oct '24
If you want different sample data, @Will Needham posted this in one of his videos some time back. Not directly related to MS, but a good resource for sample data: https://public.tableau.com/app/learn/sample-data. Though I'm not sure this is what you're really asking?
Pipeline DB2
I have successfully created a connection to my DB2 on-prem database but cannot retrieve a list of tables. I get the following error; has anyone come across this before?

Error thrown from driver. Sql code: '-805' The package corresponding to an SQL statement execution request was not found. SQLSTATE=51002 SQLCODE=-805

The previous consultant got this too, and due to time constraints ended up using ADF instead of Fabric to extract on-prem data. NOTE: I'm using the Data Gateway, which is supposed to have a built-in DB2 connector.
Chris Adams
3
43 points to level up
@chris-adams-4127
Transitioning from IBM to MS, looking to learn from yourself and others, in particular the DP-600 course and exam.

Active 10h ago
Joined May 3, 2024
Perth - Western Australia