Hello Bhawna. Regarding "partitions should be at least 1GB", it is not always that straightforward. If your use case is read-heavy, then large partitions make sense; for write-heavy use cases, smaller partitions work much better. Here is a reference video on this: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-o2k9PICWdx0.html
How do I delete partition folders/directories (which contain the parquet files)? I could remove the reference to the particular date partition from the Delta log, but the original date partition folders are not getting deleted. I tried VACUUM as well.
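For reference, here is a minimal sketch of the delete-then-vacuum flow described above, assuming PySpark with the delta-spark package; the table path, partition column, and date are placeholder examples. One common reason VACUUM appears to leave the files behind is that the removed files are still inside the default retention window (7 days), so they are not yet eligible for physical deletion.

```python
# Minimal sketch, assuming an existing SparkSession configured for Delta Lake.
# Path and partition column are hypothetical placeholders.
from delta.tables import DeltaTable

table = DeltaTable.forPath(spark, "s3://my-bucket/events")

# Logically delete one date partition: this removes the files from the Delta
# transaction log, but the underlying parquet files are kept for time travel.
table.delete("event_date = '2023-01-01'")

# Physically remove files no longer referenced by the log, once they are older
# than the retention period (default 7 days = 168 hours).
table.vacuum(retentionHours=168)
```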
Thanks Bhawna. I have a use case: I have two Delta files in S3. I need to read the first file and delete those records from the second file, without changing the file path. Is it possible, and if so, how can it be done?
@SpiritOfIndiaaa When dealing with Delta files in an S3 bucket, note that directly modifying the contents of a file in place (i.e., without changing the file path) is not possible. However, here are some alternative approaches:

1. Local modification and upload: Download the second Delta file locally, apply the necessary changes (deleting the records), and upload the modified file back to the same S3 location, overwriting the original file. This keeps the file path unchanged.

2. Upsert using Delta Lake (Databricks or open-source Delta Lake): If you have access to Databricks or a similar platform, you can use Delta Lake's MERGE operation to upsert data from one Delta table into another. This lets you insert, update, or delete records in a target Delta table based on the contents of a source table or DataFrame; see the sketch below.

3. Without Databricks: If you're not using a platform with Delta Lake support, modifying Delta files directly in S3 without changing the file path is challenging. You would need to follow the first approach (local modification) and then upload the modified file back to S3.

Remember that directly modifying files in place (especially in distributed storage systems like S3) can be complex due to transactional guarantees and the distributed nature of the data. Always ensure data consistency and back up your files before making any changes. 😊
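Here is a minimal sketch of the MERGE-based deletion mentioned in option 2, assuming PySpark with the delta-spark package; the bucket paths and the join key "id" are hypothetical placeholders to adapt to your schema. The second table's path never changes: Delta rewrites only the affected data files and records the change in the transaction log.

```python
# Minimal sketch: delete from the second Delta table every record that also
# appears in the first one, using Delta Lake's MERGE. Paths and the join key
# are assumptions for illustration.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder
    .appName("delete-matching-records")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Source: the first Delta table, containing the records to remove.
source_df = spark.read.format("delta").load("s3://my-bucket/first-delta-table")

# Target: the second Delta table, updated in place at the same path.
target = DeltaTable.forPath(spark, "s3://my-bucket/second-delta-table")

# Delete every target row whose key matches a source row.
(
    target.alias("t")
    .merge(source_df.alias("s"), "t.id = s.id")
    .whenMatchedDelete()
    .execute()
)
```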