You might think you can replace them with AI at first, but wait until you have an audit. AI would give everything away and/or make small, hard-to-detect mistakes, likely leaving you in a worse position.
@@finxter I'll keep it short: you can iterate and build on it for better outcomes. Remember that GPT models are pattern seekers, so when you prompt them you first need to set the stage, similar to giving a presentation to a room full of people who want to understand the context: explain exactly what the docs, images, or texts are and where the information comes from. This helps the model set boundaries.

Then set the actual boundaries for the model: explain why something should or shouldn't be done. This is key to getting responses to your specific queries. You also need to factor in hallucination, which is increasingly common, so keep your query close to the task you actually want done and give the model explicit instructions that steer its response toward your required outcome rather than away from it.

When prompting, also try personas so the model can identify ideal response parameters (e.g., you are an experienced finance director or CFO assigning the task to a finance intern), which tends to produce a more effective response. Much of this you will discover through trial and error: which iterative follow-up prompts give the best results depends on your requirements.

Also note that RAG is not automatically applied to every prompt on closed GPT models: if you are using the APIs, you need to set up retrieval yourself for best results. Good luck with your work :)
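To make the persona and RAG points concrete, here is a minimal sketch of how you might frame a persona in a system message and prepend retrieved context yourself before calling an API. Everything here is illustrative: the function names, the finance documents, and the naive keyword-overlap retrieval are assumptions standing in for a real retriever and a real API call.

```python
# Hypothetical sketch: persona framing plus a do-it-yourself retrieval step
# (a stand-in for RAG). The scoring heuristic is deliberately naive.

def retrieve(query, docs, k=1):
    """Rank docs by keyword overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_messages(query, docs):
    """Build a chat-style message list: persona system prompt + grounded user prompt."""
    context = "\n".join(retrieve(query, docs))
    return [
        {
            "role": "system",
            "content": (
                "You are an experienced finance director reviewing an intern's work. "
                "Answer only from the provided context; say 'unknown' otherwise."
            ),
        },
        {
            "role": "user",
            "content": f"Context:\n{context}\n\nQuestion: {query}",
        },
    ]

# Toy corpus: only the first document is relevant to the question.
docs = [
    "Q3 revenue grew 12% year over year, driven by subscriptions.",
    "The office relocated to the fourth floor in March.",
]
messages = build_messages("What drove Q3 revenue growth?", docs)
```

The resulting `messages` list is in the shape most chat-completion APIs accept; in real use you would pass it to the API of your choice and replace `retrieve` with an embedding-based search.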