Welcome to "The iT Tech Solution," your go-to destination for insightful solutions in the world of IT development! 🚀
🔍 Dive deep into the intricate realm of coding, where we unravel the complexities and offer practical solutions to the challenges faced by IT professionals like you. From mastering Python's intricacies to harnessing the power of AWS, navigating Git, and optimizing your workflow with VSCode - we've got you covered.
🛠️ Explore step-by-step tutorials, troubleshooting guides, and best practices carefully crafted to elevate your development skills.
🌐 Stay updated on the latest trends, tools, and techniques shaping the IT landscape. Our content is designed to empower you with the knowledge and skills needed to excel in the dynamic world of software development.
🚀 Ready to level up your IT game? Subscribe to "The iT Tech Solution" and embark on a journey of continuous learning and innovation. Your coding conundrums meet their match here!
To learn how to create a Free Tier account, please watch the video below: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-tV-tXpnLfMM.htmlsi=djx1ZM0ga7HQ2aDl
IMPORTANT NOTE: Also watch the video below to complete two very important setup steps before starting the hands-on. These will protect you from unwanted costs by setting up Budget Alerts, and will also create an IAM user to protect your root account: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-SSfW23C1nJY.htmlsi=nxRF15A8BDwC17fX
Watch this to learn about real-time data processing using Snowflake Streams and Tasks: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-BhioEIOqe3c.htmlsi=bCoJ8qlNjyvMOTEL
Also watch this to learn about Storage Integration, a better and recommended way to connect to AWS S3 as an external stage: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-3gHiSZ5vyN8.html
But if you do this, all your code will be in one container. Then, whenever any Lambda function is triggered, the image containing all the functionality will be loaded into memory unnecessarily, because we only need the functionality of that one Lambda. Imagine the impact if I have more than 100 Lambdas.
Yes, I agree. This model should be applied only if we have a few simple Lambdas; for many large and complex Lambdas we should always consider creating a hierarchy of folders with a separate Dockerfile for each Lambda, as sketched below. Thanks for bringing it up, really appreciate it 👍
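To illustrate (the folder and function names here are hypothetical examples, not from the video), the layout could look something like this, with one Dockerfile per Lambda:

    lambdas/
        process_orders/
            Dockerfile
            app.py
            requirements.txt
        send_notifications/
            Dockerfile
            app.py
            requirements.txt
        shared/          # common utility code, copied in by each Dockerfile

Each per-Lambda Dockerfile then stays tiny and only packages that one function, for example:

    # Hypothetical per-Lambda image; only this function's code is packaged
    FROM public.ecr.aws/lambda/python:3.12
    COPY process_orders/requirements.txt .
    RUN pip install -r requirements.txt
    COPY process_orders/app.py ${LAMBDA_TASK_ROOT}/
    COPY shared/ ${LAMBDA_TASK_ROOT}/shared/
    CMD ["app.lambda_handler"]

This way, when one Lambda is triggered, only its own small image is loaded into memory.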
Hey there, sorry for the slightly late response. Yes, you can do that by capturing the result from $ using ResultPath in the first state and InputPath in the second state, something like below:

    {
      "StartAt": "LambdaInvoke",
      "States": {
        "LambdaInvoke": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:region:account-id:function:LambdaInvoke",
          "ResultPath": "$.lambdaResult",
          "Next": "LambdaInvoke1"
        },
        "LambdaInvoke1": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:region:account-id:function:LambdaInvoke1",
          "InputPath": "$.lambdaResult",
          "End": true
        }
      }
    }

ResultPath merges the first Lambda's output into the state input under $.lambdaResult, and InputPath then passes only that piece to the second Lambda. Try it this way, it should work.
Thanks for the video, it was helpful. But in my case I set Retry [ErrorEquals] with MaxAttempts = 3, and the issue is that if some error occurs, the step function executes in a loop, never ending even after 3 attempts. Please let me know how I should fix that; all my step functions go into an infinite loop when an error comes.
I hope you are adding a Catch with the correct error to catch and using a Pass state as the fallback. I would be able to check further if I could see your Step Function code.
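For reference, here is a minimal sketch of what I mean (state names and the function ARN are placeholders): a Retry capped at three attempts, plus a Catch that routes to a Pass fallback so the execution ends cleanly instead of looping:

    {
      "StartAt": "DoWork",
      "States": {
        "DoWork": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:region:account-id:function:DoWork",
          "Retry": [
            {
              "ErrorEquals": ["States.ALL"],
              "IntervalSeconds": 2,
              "MaxAttempts": 3,
              "BackoffRate": 2
            }
          ],
          "Catch": [
            {
              "ErrorEquals": ["States.ALL"],
              "ResultPath": "$.error",
              "Next": "Fallback"
            }
          ],
          "End": true
        },
        "Fallback": {
          "Type": "Pass",
          "Result": "Handled the error gracefully",
          "End": true
        }
      }
    }

Once the retries are exhausted, the Catch fires and the Pass state terminates the execution, so nothing can loop forever.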
Hello, in my case I need to write the Spark df into Postgres. I'm currently using Kubernetes, so where would you suggest I add the JDBC jar file? I'm confused about that.
What I can suggest is to create a Dockerfile that includes the PostgreSQL JDBC driver JAR in the Spark image. Something like this:

    # Use whichever Spark base image you are already using
    FROM bitnami/spark:latest
    # Copy the driver into Spark's jars directory
    # (for the Bitnami image this is /opt/bitnami/spark/jars; adjust for other images)
    ADD postgresql-<version>.jar /opt/bitnami/spark/jars/

Then build the Docker image, push it to your Docker registry, and use the custom image in your Kubernetes job.

Another option would be to include the JDBC driver JAR within your Spark code:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder \
        .appName("YourAppName") \
        .config("spark.jars", "path/to/postgresql-<version>.jar") \
        .getOrCreate()

    df = spark.read.format("jdbc") \
        .option("url", "jdbc:postgresql://your-db-url") \
        .option("dbtable", "your-table") \
        .option("user", "your-username") \
        .option("password", "your-password") \
        .option("driver", "org.postgresql.Driver") \
        .load()
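Since you mentioned writing the DataFrame to Postgres, the write side looks very similar (a sketch; the URL, table name, and credentials are placeholders to replace with your own):

    # Write the DataFrame to Postgres over JDBC; "append" adds rows
    # to the existing table instead of overwriting it
    df.write.format("jdbc") \
        .option("url", "jdbc:postgresql://your-db-url") \
        .option("dbtable", "your-target-table") \
        .option("user", "your-username") \
        .option("password", "your-password") \
        .option("driver", "org.postgresql.Driver") \
        .mode("append") \
        .save()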
I followed all the steps, but I am getting an error like this: ...Failure using stage area. Cause: [Access Denied (Status Code: 403; Error Code: AccessDenied)]...
Apologies for responding late. A 403 means Snowflake is not able to communicate with AWS; there is some issue with the key. Try re-creating the stage and double-check the access key on the IAM user that you created for Snowflake.
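For reference, a minimal sketch of re-creating the stage with key-based credentials (the bucket path and keys below are placeholders; a Storage Integration, covered in the video linked above, is the recommended approach):

    -- Re-create the stage with the IAM user's keys for Snowflake
    CREATE OR REPLACE STAGE my_s3_stage
      URL = 's3://your-bucket/path/'
      CREDENTIALS = (AWS_KEY_ID = '<your-access-key-id>'
                     AWS_SECRET_KEY = '<your-secret-access-key>');

If the keys are correct, also make sure the IAM user's policy actually grants s3:GetObject on that bucket path, since a missing permission produces the same 403.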
Hi, this setup spiked my billing very high. The setup was a Lambda function that reads the latest file from an S3 directory, transforms it, and finally writes it to a target S3 directory. All of this was supposed to run once per S3 notification to the Lambda function that a file had just arrived in S3. But it went into a loop and made the S3 and Lambda billing spike. Let me know what the issue in my setup is that I didn't notice at first while running this Python script in Lambda.
Were you writing the files back to the same directory? You can set the event notification on a specific key prefix and write the processed file to a different prefix that has no event notification; otherwise it will trigger an infinite loop. Be careful!
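For illustration, a rough sketch of the guard I mean (the incoming/ and processed/ prefixes and the transformation step are hypothetical; adapt them to your setup):

    import urllib.parse
    import boto3

    s3 = boto3.client("s3")

    SOURCE_PREFIX = "incoming/"   # the only prefix the S3 event notification covers
    TARGET_PREFIX = "processed/"  # no event notification configured here

    def lambda_handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            # Safety guard: never process our own output, or we re-trigger ourselves
            if not key.startswith(SOURCE_PREFIX):
                continue

            obj = s3.get_object(Bucket=bucket, Key=key)
            transformed = obj["Body"].read()  # your transformation goes here

            # Write to a different prefix that has no event notification attached
            target_key = TARGET_PREFIX + key[len(SOURCE_PREFIX):]
            s3.put_object(Bucket=bucket, Key=target_key, Body=transformed)

With the notification scoped to the incoming/ prefix and all output going under processed/, each upload triggers exactly one invocation.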
Hello Pradeep, that can be achieved using the ResultPath and InputPath Step Function fields. Check whether this helps: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-9rzIhRb2qKk.html
Well, in that case you will have to read the file. Something like below:

    # Open and read the .txt file
    with open('env_variables.txt', 'r') as file:
        lines = file.readlines()

    # Parse each line and store the variables in a dictionary
    env_vars = {}
    for line in lines:
        # Split on the first '=' only, so values containing '=' stay intact
        key, value = line.strip().split('=', 1)
        env_vars[key] = value

    # Now you can use the environment variables in your code
    api_key = env_vars.get('API_KEY')
    print("API Key:", api_key)

But keep in mind that storing sensitive information like passwords or API keys in plaintext files is not recommended for production use.
Hi, no, it's not free. You don't need to pay anything to create or activate an account as such, but you will be charged for each API call that your program makes; you pay for what you use. As per their website, new users get $5 worth of free tokens. After that, it is charged at $0.0010 per 1K tokens for input and $0.0020 per 1K tokens for output.

Tokens can be thought of as pieces of words. Before the API processes the prompts, the input is broken down into tokens:
1 token ~= 4 chars in English
1 token ~= ¾ of a word
100 tokens ~= 75 words

More about these can be found at:
openai.com/pricing
help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
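As a rough back-of-the-envelope sketch using the rates quoted above (the token counts below are illustrative, not from any real call):

    # Estimate the cost of one API call at the rates quoted above
    INPUT_RATE = 0.0010 / 1000    # dollars per input token
    OUTPUT_RATE = 0.0020 / 1000   # dollars per output token

    prompt_tokens = 750       # roughly 560 words of input (illustrative)
    completion_tokens = 250   # roughly 190 words of output (illustrative)

    cost = prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE
    print(f"Estimated cost: ${cost:.6f}")  # -> Estimated cost: $0.001250

So even a fairly long prompt and response costs only a fraction of a cent, and the free $5 goes a long way for experimentation.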