Chatbots: From Bad Data to Propaganda & Cybersecurity, Experts Throw Light on the Dark Side

Months after the launch of the highly popular ChatGPT, tech experts are flagging issues associated with chatbots such as snooping and misleading data.

ChatGPT, developed by Microsoft-backed OpenAI, has become a popular artificial intelligence (AI) tool, with people using it to compose everything from letters to poems. But those who have examined it closely have found many mistakes, raising doubts about its reliability.

Also Read | How to use ChatGPT: A step-by-step guide to using OpenAI’s human-like language model

Reports also suggest that it can pick up the prejudices of those who train it and produce content that is sexist, racist or otherwise offensive.

For example, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar shared a tweet that said: “Microsoft’s AI chatbot told a reporter it wanted to be ‘free’ and spread propaganda and misinformation.” The chatbot also reportedly urged the reporter to leave his wife.

Meanwhile, China has entered the AI chatbot race, with major companies like Baidu and Alibaba already developing their own models. But when it comes to a politically unbiased chatbot, expectations are low, given that Beijing is famous for its censorship and propaganda practices.

Bad Data

While many people are obsessing over such chatbots, they are missing the basic threats associated with these technologies. For example, experts agree that chatbots can be poisoned with misinformation, creating a confusing data environment.

Priya Ranjan Panigrahi, founder and CEO of Septace, told News18: “It is not only the data environment that may be affected, but also how the models are used, especially in applications like natural language processing, chatbots and other AI-powered systems.”

Major Vineet Kumar, founder and global president of Cyberpeace Foundation, believes the quality of the data used to train AI models is critical, and that bad data can lead to biased, inaccurate or inappropriate responses.

He suggested that the creators of these chatbots should create a strong and robust policy framework to prevent any misuse of the technology.

Also Read | Velocity launches India’s first ChatGPT-powered AI chatbot ‘Lexi’

Kumar added: “To mitigate these risks, it is important for AI developers and researchers to carefully curate and evaluate the data used to train AI systems, and to monitor and test the outputs of these systems for accuracy and bias.”
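The curation-and-monitoring loop Kumar describes can be illustrated with a minimal sketch. The blocklist, the example data and the helper names below are invented for illustration; real pipelines rely on trained classifiers and human review rather than simple keyword matching.

```python
# Minimal sketch, assuming a hypothetical keyword blocklist:
# screen training examples before they reach a model, and flag
# generated outputs for human review.

FLAGGED_TERMS = {"misinformation", "slur"}  # hypothetical blocklist

def curate(examples):
    """Split training examples into clean and rejected sets."""
    clean, rejected = [], []
    for text in examples:
        words = set(text.lower().split())
        (rejected if words & FLAGGED_TERMS else clean).append(text)
    return clean, rejected

def audit_output(text):
    """Return True if a model response should go to human review."""
    return bool(set(text.lower().split()) & FLAGGED_TERMS)

clean, rejected = curate([
    "The capital of France is Paris",
    "This sentence contains misinformation",
])
print(len(clean), len(rejected))  # 1 1
print(audit_output("a harmless reply"))  # False
```

The same two-step pattern, filtering inputs before training and auditing outputs after generation, is what experts mean by monitoring AI systems for accuracy and bias, only at far greater scale.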

According to him, it is also important for governments, organizations and individuals to be aware of such risks and to hold AI developers accountable for responsible development and deployment of AI systems.

Safety Issues

News18 asked tech experts whether it is safe to sign in to these AI chatbots, considering cybersecurity issues and the potential for snooping.

Srikant Bhalerao, founder and CEO of Seracal, said: “Whether it is a chatbot or not, we should always think before sharing any personal information on the internet or logging into any system. That said, extra caution should be taken with AI-powered interfaces such as chatbots because they can access massive amounts of data.”

He added that no system or platform is completely safe from hacking or data breaches, so even if a chatbot is designed with strong security measures, your information could still be compromised if the system is breached.

Meanwhile, Panigrahi, CEO of Septace, said that while some chatbots may be built with strong security and privacy safeguards, others may be built with weaker security measures, or even designed with the intent to collect and exploit user data.

He added: “It is important to check the privacy policies and terms of service of any chatbot you use. These policies must outline the types of data that are collected, how that data is used and stored, and how it may be shared with third parties.”

Also Read | Five ChatGPT Extensions You Can Use on Chrome Browser

In this regard, CPF founder Kumar said there are many concerns and potential threats to consider, including privacy and security, misinformation and propaganda, censorship and suppression of freedom of expression, competition and market dominance, as well as surveillance.

He added: “While there are potential concerns about the development and use of AI chatbots, the specific risks and benefits of each technology need to be considered on a case-by-case basis. Ultimately, the responsible development and deployment of AI technologies will require a combination of technical expertise, ethical considerations and regulatory oversight.”

Additionally, Kumar said that “ethical AI” is important to ensure that AI systems, including chatbots, are used for the betterment of society and not for harm.
