Google’s Answer to ChatGPT is Bard, But Will AI Chatbots Pose Cybersecurity Threat, Increase Crimes?

The name ‘ChatGPT’ has been trending across social media since November, and the tool has become the fastest-growing consumer application to reach 100 million users. Now, Google has confirmed that it is working on its own AI-powered chatbot, ‘Bard’.

Bard is based on Google’s language model LaMDA and will compete with OpenAI’s ChatGPT. It has been reported that LaMDA’s responses were so human-like that some claimed it might be sentient.

Alphabet CEO Sundar Pichai announced the launch of Bard, an AI chatbot, in a blog post on February 6. He described the tool as an “experimental conversational AI service” that would answer users’ questions and participate in conversations.

Additionally, it was stated that a select group of “trusted testers” would have access to the program before it was made available to the public in the coming weeks.

‘Technology can be a useful servant’

ChatGPT is considered by many to be the best AI chatbot ever made. Within a few months it has become a worldwide phenomenon, with some experts saying that the goal is to create an AI that equals or even exceeds human intelligence.

While Microsoft CEO Satya Nadella recently praised AI chatbots, citing how the technology could help an Indian farmer access a government program, some experts are concerned about its impact.

Professor V Ramgopal Rao, former director of IIT Delhi, called the phenomenon “remaining relevant in the post-Chat-GPT era”. In a note to his students, Rao said, “technology can be a useful servant but a dangerous master”.

However, given Google’s dominance of the tech market, Bard may erode ChatGPT’s popularity despite Microsoft’s billion-dollar investment in OpenAI.

It should be noted here that despite having extensive knowledge about the type of AI underpinning ChatGPT, Google has so far been more careful about making its tools available to the general public.

Why AI chatbots could be a concern

While ChatGPT has become popular among techies, experts have drawn attention to the toxicity and illegal activities that AI chatbots can lead to.

A research report by cybersecurity provider Check Point revealed attempts by Russian cybercriminals to circumvent OpenAI’s restrictions and use ChatGPT for malicious purposes, while other research noted how hackers could potentially use ChatGPT and Codex to execute targeted, efficient cyber attacks.

Meanwhile, another report states that a person questioned ChatGPT about drug smuggling in Europe and was able to extract information through carefully crafted prompts.

Moreover, experts have previously highlighted that LaMDA and GPT-3.5, the model that powers ChatGPT, have a well-documented tendency to produce harmful content, such as hate speech, and to confidently generate misinformation.

While ChatGPT’s makers acknowledge that it has flaws and that the algorithm often produces plausible-sounding but incorrect or illogical answers, Google is clearly trying to take a different approach.

Pichai said in the blog post that Google will “combine external feedback with our own internal testing” to ensure Bard’s responses meet high standards of quality, safety and grounding in real-world information. Even so, it is possible that the system will make mistakes, some of which may be serious.

He also said: “Bard can be an outlet for creativity, and a launchpad for curiosity, helping you explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in soccer, and then get practice to build your skills.”

Additionally, Bard apparently has the ability to answer inquiries about recent events, something ChatGPT struggles with, using information from the web to produce new, high-quality answers.

Even though there are concerns regarding the use of ChatGPT in illegal activities, there is no doubt that people are fascinated with the tool and its massive popularity may have prompted Google to make the initial announcement at this time.
