Hello everyone,
I would like to build a data flow with PySpark notebooks in Microsoft Fabric.
More specifically, I want to extract tables from SAP and store the data in a Lakehouse after a few transformations.
The connection to SAP requires an on-premises data gateway.
Furthermore, if possible, I would like to avoid staging the data first, i.e. a flow like SAP -> Lakehouse -> Notebook -> Lakehouse.
Does anybody know whether I can use the on-premises data gateway directly from a notebook, so that I don't have to stage the data anywhere?
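In other words, this is roughly what I'd like to run straight from the notebook. It's only a sketch: the host, port, credentials, and table names are placeholders, and the direct JDBC path to SAP it assumes is exactly what I'm unsure the gateway can provide:

```python
# Hypothetical sketch of a direct SAP read from a Fabric notebook.
# Host, port, credentials and table names are placeholders, and I'm
# assuming a JDBC route to SAP that may not even be reachable through
# the gateway -- that's exactly my question.
# ("spark" is the session that Fabric notebooks provide out of the box.)
from pyspark.sql import functions as F

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sap://my-sap-host:30015")  # SAP HANA JDBC URL (placeholder)
    .option("driver", "com.sap.db.jdbc.Driver")     # SAP HANA JDBC driver class
    .option("dbtable", "SAPSR3.VBAK")               # example: sales order headers
    .option("user", "SAP_USER")
    .option("password", "********")
    .load()
)

# ... a few transformations ...
df_clean = df.select("VBELN", "ERDAT").where(F.col("ERDAT") >= "20240101")

# ... then write straight into the Lakehouse as a Delta table,
# skipping the extra staging hop.
df_clean.write.mode("overwrite").format("delta").saveAsTable("sap_vbak")
```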
Or do I have to use Data Pipelines / Copy jobs or Dataflows Gen2 whenever the data gateway is involved?