Hello, this is great content. However, when I try to reproduce it I hit a bug: when I open and authenticate the Jupyter notebook in Docker, I don't see any notebook file; the directory is empty. If I upload the code file manually, running it produces this error: org.apache.spark.sql.AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide". The code is exec.ipynb from your drive folder. Please help if you can, I would really appreciate it.
I had the same problem. I asked ChatGPT, replaced the first part of the code, and was then able to reproduce the results:

os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5 pyspark-shell'
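For anyone else hitting this: a minimal sketch of the fix, assuming Spark 2.4.5 built against Scala 2.11 (adjust both version numbers to match your own installation). The key point is that PYSPARK_SUBMIT_ARGS must be set before any SparkSession or SparkContext is created, otherwise the --packages flag is ignored and the kafka data source is never loaded.

```python
import os

# Pull in the Structured Streaming Kafka connector at startup.
# Format: spark-sql-kafka-0-10_<scala-version>:<spark-version>
# These versions are an assumption; match them to your Spark/Scala build.
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5 pyspark-shell'
)

# Only AFTER the line above should you create the session, e.g.:
#   from pyspark.sql import SparkSession
#   spark = SparkSession.builder.appName('kafka-demo').getOrCreate()
#   df = (spark.readStream.format('kafka')
#         .option('kafka.bootstrap.servers', 'localhost:9092')
#         .option('subscribe', 'my-topic')
#         .load())
```

If you restart the notebook kernel, remember to rerun this cell first, since the environment variable only takes effect for sessions created after it is set.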