Apple ‘Restricts’ Internal Use Of ChatGPT, GitHub’s Copilot Over Data Leak Risk

New Delhi: Apple has reportedly restricted internal use of the AI chatbots ChatGPT and GitHub’s Copilot over concerns that its confidential data could end up with the tools’ developers, who train AI models on user data. According to The Wall Street Journal, the iPhone maker is worried that employees may release confidential data “as it develops its own similar technology”.

Apple has restricted the use of ChatGPT and other external AI tools for some employees “as it develops its own similar technology,” according to a document reviewed by the WSJ.

The tech giant is developing its own generative AI models, but did not elaborate on what they might be used for, according to the report. In March, The New York Times reported that several Apple teams, including the team working on Siri, are experimenting with language-generating AI.

ChatGPT has reportedly been on Apple’s list of restricted software for months.

Samsung has also reportedly blocked the use of generative AI tools like ChatGPT on company-owned devices as well as non-company-owned devices running on internal networks.

The South Korean giant is said to be developing its own in-house AI tools for “software development and translation”. The decision comes after Samsung’s sensitive data was accidentally leaked via ChatGPT last month.

Organizations such as Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, JPMorgan, Walmart and Verizon have also restricted their employees from accessing ChatGPT.