
AWS Glue PySpark: Flatten Nested Schema (JSON) 

DataEng Uncomplicated
19K subscribers
14K views

This is a technical tutorial on how to flatten, or unnest, JSON arrays into columns in AWS Glue with PySpark. The video walks through how to use the relationalize transform and how to join the resulting dynamic frames back together for further analysis or for writing to another location.
The Script and Example: github.com/Adr...
Sample data: github.com/Adr...
#aws #awsglue
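The idea behind the relationalize transform described above can be illustrated without a Glue cluster: it splits a nested record into a flat "root" table plus one child table per array, linked by a generated key that the tables can later be joined on. A minimal plain-Python sketch of that idea (the record shape and key names are invented for illustration, not taken from the video's script or dataset):

```python
# Plain-Python sketch of what Glue's relationalize does conceptually:
# one nested record becomes a flat root row plus child rows per array,
# linked by a generated id so the pieces can be joined back together.

def relationalize(record, root_id):
    """Split one nested record into a root row and per-array child tables."""
    root, children = {}, {}
    for key, value in record.items():
        if isinstance(value, list):
            # Arrays are pulled out into their own table, keyed back to root.
            children[key] = [
                {"root_id": root_id, "index": i, **item}
                for i, item in enumerate(value)
            ]
        else:
            root[key] = value
    root["id"] = root_id
    return root, children

order = {
    "customer": "Ada",
    "orders": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],
}
root, children = relationalize(order, root_id=0)
print(root)                # {'customer': 'Ada', 'id': 0}
print(children["orders"])  # two rows, each carrying root_id=0 for the join
```

In Glue itself the equivalent is the `Relationalize` transform, which returns a collection of DynamicFrames with generated `id`/`index` columns playing the role of `root_id` here.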

Published: Sep 4, 2024

Comments: 26
@oggyoggyoggyy · a year ago
This is a truly amazing channel helping people to understand and learn more about ETL and cloud computing. Thanks so much!
@DataEngUncomplicated · a year ago
Wow! Thanks for the Super Thanks, James! I don't get many of these as a small channel, so it is very much appreciated!
@oggyoggyoggyy · a year ago
@@DataEngUncomplicated Don't know if you can include a PayPal link, so that you wouldn't have to pay the commissions to RU-vid.
@smrtysam · a year ago
Really good tutorial.
@DataEngUncomplicated · a year ago
Thanks! I'm glad it was helpful.
@meghanayerramsetti8394 · 7 months ago
Hi, I have a completely nested JSON file. When I run a crawler on it, I get only one schema, with a single column named "array" of data type array, and the actual column names and data types are nested inside that array type. Is that correct?
@DataEngUncomplicated · 7 months ago
Hi, do you want to elaborate on your problem? I don't understand the question.
@Learn_IT_with_Azizul · a year ago
Great 👍
@DataEngUncomplicated · a year ago
Thank you! Cheers!
@offersononlineshopping7872 · 8 months ago
I don't know the exact schema of the input table. Would you have a dynamic approach for the same scenario, instead of hard-coding the column names?
@DataEngUncomplicated · 4 months ago
Hi, sorry for the late reply. Well, you can create a Python function to retrieve the column names from your dataframe. Once you have them, you can dynamically pass them to this function.
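One way to sketch the suggestion above in plain Python (the function name and record shapes are illustrative, not from the video's script): derive the column names from the data itself rather than hard-coding them, then pass the resulting list along to the flattening step.

```python
# Sketch: collect column names from the records themselves instead of
# hard-coding them. With a Spark DataFrame the equivalent is simply
# df.columns; this stdlib version shows the same idea on raw dicts.

def column_names(records):
    """Union of keys across all records, in first-seen order."""
    seen = {}
    for rec in records:
        for key in rec:
            seen.setdefault(key, None)
    return list(seen)

rows = [
    {"id": 1, "name": "Ada"},
    {"id": 2, "name": "Grace", "email": "g@example.com"},
]
print(column_names(rows))  # ['id', 'name', 'email']
```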
@Streampax · a year ago
Hi, can we position or change the column order when transforming a JSON file while loading the metadata?
@najwanabdulkareem4015 · a year ago
Hi... this is a great video. I have a question: what happens if we don't have a common column to join the different datasets on? Is there any workaround?
@DataEngUncomplicated · a year ago
Hi Najwan, if you are using this method to unflatten, there has to be a common field, if I recall, since it would create a one-to-many relationship. Unless you are asking about flattening a JSON with a different method?
@claytonvanderhaar3772 · a year ago
Hi, your channel is really awesome and helpful. I was wondering if it is possible to join two different JSON files stored in separate S3 buckets.
@andrewwatson6473 · a year ago
I believe this shouldn't be an issue. As long as you have the prerequisite permissions in place for both S3 buckets, I think you can just append the S3 URI to the paths array in connection_options.
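Concretely, the suggestion above amounts to listing both locations in one read. A hedged sketch of what the `connection_options` argument to a Glue S3 read might look like (the bucket names and prefixes are placeholders, and read permissions on both buckets are assumed):

```python
# Hypothetical connection_options for a Glue S3 JSON read spanning two
# buckets, e.g. glueContext.create_dynamic_frame.from_options(...).
# Bucket names and prefixes below are placeholders, not real locations.
connection_options = {
    "paths": [
        "s3://bucket-one/customers/",
        "s3://bucket-two/orders/",
    ],
    "recurse": True,  # also read objects under nested prefixes
}
print(connection_options["paths"])
```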
@claytonvanderhaar3772 · a year ago
@@andrewwatson6473 Great, thanks for the help, I appreciate it.
@DataEngUncomplicated · a year ago
Hi Clayton, thanks for your feedback! Yes, this is totally possible! Once you read the data into separate dataframes, you can use the join transform to join them.
@claytonvanderhaar3772 · a year ago
@@DataEngUncomplicated Hi, thanks. Another issue I am having: I have to join data on an attribute that comes in multiple times with a different timestamp.
@DataEngUncomplicated · a year ago
@@claytonvanderhaar3772 Hi Clayton, this sounds like a data problem. I'm not sure whether you want to join on this timestamp or not, but you could change your timestamp from a datetime to a date.
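The datetime-to-date trick mentioned here is a one-liner; a stdlib sketch (the field names are invented for illustration):

```python
from datetime import datetime

# Sketch: collapse full timestamps to dates so rows that differ only by
# time-of-day share a single join key. Field names are illustrative.
rows = [
    {"order_id": 7, "ts": datetime(2024, 9, 4, 9, 15)},
    {"order_id": 7, "ts": datetime(2024, 9, 4, 17, 42)},
]
join_keys = {(r["order_id"], r["ts"].date()) for r in rows}
print(join_keys)  # both rows collapse to one key: (7, date 2024-09-04)
```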
@saad1732 · a year ago
Amazing, these kinds of scenarios come up more often than not as a data engineer. Can we run Python code within the same interactive session notebook? For instance, could we run Python code to pull data from an API (using requests) or from an SQS destination queue as JSON, then relationalize it with PySpark code?
@DataEngUncomplicated · a year ago
Thanks! You could do that if your source is an SQS queue, but using PySpark is complete overkill for processing small datasets. I would just run your Python code in a Lambda function.
@shrikantpandey6401 · a year ago
Hi, please provide the dataset you used; that would be great.
@DataEngUncomplicated · a year ago
Sure Shrikant! I'll upload it to my GitHub repo in the morning which you can find in the description of the video.
@DataEngUncomplicated · a year ago
Please see link for sample dataset: github.com/AdrianoNicolucci/dataenguncomplicated/blob/main/aws_glue/sample_data/customer_orders_with_addresses.json
@shrikantpandey6401 · a year ago
@@DataEngUncomplicated Thanks a lot for providing the dataset :)