I believe a chatbot should inform people that they are conversing with a bot, and give them the opportunity to continue or request a real person. I don't think it is ethical to let someone be fooled into thinking they are chatting with a real person when they aren't.
You raise a good point. It’s definitely important for chatbots to disclose that they’re automated and offer users the choice to speak with a real person. I hadn’t considered that aspect before, but I agree it’s essential for transparency and ethical interactions. Thanks for highlighting this!