I'm getting an error when running the code from an IP other than the Spark server or localhost — could that be the issue? I've published the port on my server, but I still get "An existing connection was forcibly closed by the remote host". Any help?
Could you please make a video on running PySpark jobs on Airflow 2.4.0 with Bitnami Spark 3.3.1? I have been trying to run PySpark jobs but keep getting the "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources" error, even though I can see in the Spark master UI that the worker node is active and has enough memory.
Try exposing port 7077 on the spark-master container so that it can be reached publicly or privately. Then, for the workers on the different server, set SPARK_MASTER_URL to the Spark master's IP:7077.
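A minimal sketch of what that could look like in the Bitnami docker-compose.yml. The service names and environment variables follow the Bitnami Spark image's conventions, but treat the exact values as assumptions for your own setup:

```yaml
# Sketch only: expose the master's cluster port and point a worker at it.
services:
  spark-master:
    image: bitnami/spark:3.3.1
    environment:
      - SPARK_MODE=master
    ports:
      - "7077:7077"   # cluster port, so remote workers/drivers can reach the master
      - "8080:8080"   # master web UI
  spark-worker:
    image: bitnami/spark:3.3.1
    environment:
      - SPARK_MODE=worker
      # For a worker on a different server, replace spark-master
      # with the master host's IP or DNS name.
      - SPARK_MASTER_URL=spark://spark-master:7077
```

With the port published, workers on another machine can register against `spark://<master IP>:7077` instead of the in-network service name.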
Won't start; I'm using Windows.

spark-livy_1 | Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: SPARK_HOME path does not exist
spark-livy_1 |   at scala.Predef$.require(Predef.scala:224)
spark-livy_1 |   at org.apache.livy.utils.LivySparkUtils$.testSparkHome(LivySparkUtils.scala:56)
spark-livy_1 |   at org.apache.livy.server.LivyServer.start(LivyServer.scala:77)
spark-livy_1 |   at org.apache.livy.server.LivyServer$.main(LivyServer.scala:423)
spark-livy_1 |   at org.apache.livy.server.LivyServer.main(LivyServer.scala)
spark_docker_spark-livy_1 exited with code 1
Stopping spark_docker_spark-worker_1 ...
Stopping spark_docker_spark-master_1 ...
It would be good to hear an answer — I had the same problem as the others. When pointing to the image tag it can't find it, and when pointing to localhost:7077 it says the worker has no resources.
@priyankabaswa378 I'm facing a similar issue. I changed spark.master to os.environ.get("SPARK_MASTER_URL", "spark://localhost:7077"), and now I'm getting a different error: "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources". Even after I removed the memory and core settings on the worker, the error persists.
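One common cause of "Initial job has not accepted any resources" with Docker is that the workers register fine but can't connect back to the driver, because the driver advertises a hostname the workers can't resolve. A sketch of the relevant settings, built from environment variables — the env var names and `host.docker.internal` default are assumptions, not something from this thread:

```python
import os

# Hypothetical helper: assemble the Spark settings the driver needs when it
# runs outside the Docker network that hosts the master and workers.
def build_spark_conf():
    master = os.environ.get("SPARK_MASTER_URL", "spark://localhost:7077")
    # Workers connect BACK to the driver; if the driver's advertised host is
    # a name the workers can't resolve, the job sits at "Initial job has not
    # accepted any resources" even though the workers look healthy in the UI.
    driver_host = os.environ.get("SPARK_DRIVER_HOST", "host.docker.internal")
    return {
        "spark.master": master,
        "spark.driver.host": driver_host,       # address workers use to reach the driver
        "spark.driver.bindAddress": "0.0.0.0",  # let the driver listen on all interfaces
    }

conf = build_spark_conf()
for key, value in conf.items():
    print(f"{key} = {value}")
```

These key/value pairs would then be passed to `SparkSession.builder.config(...)`; checking the worker logs for failed connection attempts back to the driver is a quick way to confirm this is the problem.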
Hey, thanks! Just one question if I may. I started a container from raw.githubusercontent.com/bitnami/bitnami-docker-spark/master/docker-compose.yml using docker-compose up. I've checked that the Scala and Spark versions are correct, and I also copied your SparkSession configuration and set my master URL. But when I try to run the program I get: WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master f85ba43781d0:7077. Many thanks!
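For what it's worth, that warning suggests the driver is using the master's advertised address, which here is a Docker container ID (f85ba43781d0) that isn't resolvable from outside the Docker network. One possible fix is to give the master a stable hostname in the compose file so its advertised URL is predictable — a sketch, with names and values as assumptions:

```yaml
# Sketch: pin the master's hostname so its advertised URL is stable and resolvable.
services:
  spark-master:
    hostname: spark-master   # master advertises spark://spark-master:7077 instead of a container ID
    ports:
      - "7077:7077"
# From the host, either connect with spark://localhost:7077, or map the name
# spark-master to the Docker host's IP in your hosts file.
```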