Privacy and security behind Hey ChatGPT create a picture of me.

3 min readApr 25, 2025

Hey ChatGPT create a picture of myself.

Wooh ChatGPT creates a picture of things surrounding your interest and liking. This makes people fascinating. But nobody knows the data process behind this and how the AI was trained to do such things.

Day to day, all the AI competitors develop their AI models with cutting-edge technologies. For example, ChatGPT recently released GPT o3 and o4-mini. Google released Gemini 2.5, likewise Anthropic and Grok released their most capable AI models. Did anyone think about their privacy by using these AI models? Let’s break them down.

OpenAI and other companies have their own data and privacy protection policies. Some chatbots already enable encryption for their chats to protect them from third-party access. The catch is, it’s not end-to-end encrypted (E2EE). Companies can still use your data to train their models.

Most of these companies stated that they use user input data to train their models, but it won’t be used as individual data; it will be used to train their models as bulk data. So, the chances of getting one’s data pushed to the AI model are very unlikely to happen.

There are some general risks to consider when it comes to generative AI. The users are still unclear about what personal information, user behaviour, and analytics are being recorded and retained, or shared with third parties. These models can be susceptible to adversarial prompt engineering, where malicious actors manipulate input to generate harmful or misleading content. Generative AI systems have the potential to generate false, misleading, or inaccurate content. These are the common risks that can be caused by these AI models.

How to use these modules safely and securely.

Since most of the AI has its own memory power to remember things. What we are saying and tracking our behaviours. So, if you don’t want it to remember things, you can always use these chatbots without logging in to an account, so it goes as a temporary chat. Also, you have the option to switch off the memory, so it won’t remember or store our personal data. This makes sure the chatbot cannot track our behaviour. But in my opinion, it’s still considered a privacy issue. So, the better way to use it is not to log in to the chatbot and not to share too much personal information.

The next major issue is how an organisation use these chatbots without the fear of data being leaked? For organisations using Microsoft 365 and the data encrypted by Microsoft’s purview, they can use Copilot as their generative AI. These organisations’ data will remain within their organisation’s tenant, and it will not be used as publicly trainable data. Also, the advantage is that users can use Copilot across the Office 365 apps to work productively.

Click this link to view the security of Microsoft 365 Copilot

As a summary, we cannot ensure the privacy of any data which is fed to the internet or these generative AIs. In my point of view, anything we share on the internet is not considered secure or protected. Even though these companies guarantee to protect our privacy and data, we don’t know what is happening behind it and how our data is stored or used. As a best practice, it is good not totrust any of these products. Only share your data which you think is okay if this data is publicly available. Other than that, don’t share any of your sensitive data.

--

--

Abith Ahamed
Abith Ahamed

Written by Abith Ahamed

Passionate about technology, networking, and cybersecurity. Network Engineer| Cybersecurity Specialist | Constantly exploring the ever-evolving tech landscape.

No responses yet