Accessing data via gateway in notebooks
Hello everyone,
I would like to develop a flow using PySpark notebooks in Microsoft Fabric.
More specifically, I want to extract tables from SAP and store the data in a Lakehouse after a few transformations.
I need an on-premises data gateway for the connection.
Furthermore, I would like to avoid the following staging pattern if possible: SAP -> Lakehouse -> Notebook -> Lakehouse.
Does anybody know if I can use the on-premises data gateway directly in a notebook, so that I don't have to stage the data anywhere?
Or do I have to use Data Pipeline Copy jobs or Dataflows whenever the data gateway is involved?
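In case the staged route turns out to be necessary, the notebook step in the middle (Lakehouse -> transform -> Lakehouse) could look roughly like the sketch below. It is written with pandas so it runs anywhere; in a Fabric notebook the same logic would use Spark DataFrames. All table names, column names, and paths here are hypothetical examples, not anything from an actual SAP system.

```python
import pandas as pd

# In a real notebook the staged SAP extract would be read from the Lakehouse,
# e.g. pd.read_parquet("/lakehouse/default/Files/sap/customers.parquet").
# (Hypothetical path; an inline sample stands in for it here.)
sap_customers = pd.DataFrame({
    "KUNNR": ["0000000001", "0000000002"],  # SAP-style customer number
    "NAME1": ["Contoso ", " Fabrikam"],     # name with stray whitespace
    "LAND1": ["DE", "US"],                  # country key
})

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Example transformation: rename SAP columns and trim whitespace."""
    out = df.rename(columns={
        "KUNNR": "customer_id",
        "NAME1": "customer_name",
        "LAND1": "country",
    })
    out["customer_name"] = out["customer_name"].str.strip()
    return out

result = transform(sap_customers)

# In a real notebook the result would then be written back to the Lakehouse,
# for example as a Delta table (sketch only, not shown here).
print(result.columns.tolist())
```

The point of the sketch is simply that the notebook never talks to the gateway itself; it only sees data that a pipeline or Dataflow has already landed in the Lakehouse.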
Mats Ka
Learn Microsoft Fabric
skool.com/microsoft-fabric