It would be interesting to have some commentary on how Spark pools and sessions are spun up to support Notebook activities in a pipeline. Are all activities run on the same pool in the same session, or is a new session started for each activity? And is there any overhead in calling a Notebook from another Notebook?