Artificial Intelligence News

The AI News blog, updated often.

Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans

Categories: Apple, Applications, Artificial Intelligence, Chatbots, Companies, Ethics & Society, Machine Learning, Privacy

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)

Elon Musk has withdrawn his lawsuit against OpenAI, which sought remedies for “breach of contract, promissory estoppel, breach of fiduciary duty, unfair business practices, and accounting,” as well as specific performance, restitution, and damages.

The withdrawal of the lawsuit comes at a time when Musk is strongly opposing Apple’s plans to integrate ChatGPT into its operating systems.

During Apple’s keynote event announcing Apple Intelligence for iOS 18, iPadOS 18, and macOS Sequoia, Musk threatened to ban Apple devices from his companies, calling the integration “an unacceptable security violation.”

Despite assurances from Apple and OpenAI that user data would only be shared with explicit consent and that interactions would be secure, Musk questioned Apple’s ability to ensure data security, stating, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”

Since bringing the lawsuit against OpenAI, Musk has also created his own AI company, xAI, and secured over $6 billion in funding for his plans to advance the Grok chatbot on his social network, X.

While Musk’s reasoning for dropping the OpenAI lawsuit remains unclear, his actions suggest a potential shift in focus towards advancing his own AI endeavours while continuing to vocalise his criticism of OpenAI through social media rather than the courts.

AI pioneers turn whistleblowers and demand safeguards

Categories: Artificial Intelligence, Companies, Ethics & Society, Legislation & Government

OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology.

In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI’s “superalignment” efforts to ensure advanced AI systems remain aligned with human values. Leike’s exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as “magical” at its Spring Update event.

According to reports, Leike’s departure was driven by constant disagreements over security measures, monitoring practices, and the prioritisation of flashy product releases over safety considerations.

Leike’s exit has opened a Pandora’s box for the AI firm. Former OpenAI board members have come forward with allegations of psychological abuse levelled against CEO Sam Altman and the company’s leadership.

The growing internal turmoil at OpenAI coincides with mounting external concerns about the potential risks posed by generative AI technology like the company’s own language models. Critics have warned about the imminent existential threat of advanced AI surpassing human capabilities, as well as more immediate risks like job displacement and the weaponisation of AI for misinformation and manipulation campaigns.

In response, a group of current and former employees from OpenAI, Anthropic, DeepMind, and other leading AI companies have penned an open letter addressing these risks.

“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies,” the letter states.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.”

The letter, which has been signed by 13 employees and endorsed by AI pioneers Yoshua Bengio and Geoffrey Hinton, outlines four core demands aimed at protecting whistleblowers and fostering greater transparency and accountability around AI development:

That companies will not enforce non-disparagement clauses or retaliate against employees for raising risk-related concerns.

That companies will facilitate a verifiably anonymous process for employees to raise concerns to boards, regulators, and independent experts.

That companies will support a culture of open criticism and allow employees to publicly share risk-related concerns, with appropriate protection of trade secrets.

That companies will not retaliate against employees who share confidential risk-related information after other processes have failed.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” said Daniel Kokotajlo, a former OpenAI employee who left due to concerns over the company’s values and lack of responsibility.

The demands come amid reports that OpenAI has forced departing employees to sign non-disclosure agreements preventing them from criticising the company or risk losing their vested equity. OpenAI CEO Sam Altman admitted being “embarrassed” by the situation but claimed the company had never actually clawed back anyone’s vested equity.

As the AI revolution charges forward, the internal strife and whistleblower demands at OpenAI underscore the growing pains and unresolved ethical quandaries surrounding the technology.

DuckDuckGo releases portal giving private access to AI models

Categories: Applications, Chatbots, Companies, Ethics & Society, Privacy, Virtual Assistants

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama 3 70B and Mistral AI’s Mixtral 8x7B.

What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users.

The company has agreements in place with all model providers to ensure that any saved chats are completely deleted within 30 days, and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than changing the privacy settings for each service.
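
For readers curious what that metadata stripping looks like in practice, here is a minimal sketch of the general relay pattern DuckDuckGo describes: the user’s message is forwarded to the model provider from the relay operator’s own servers, with no client-identifying details attached. The endpoint URL, header choices, and response field below are illustrative assumptions, not DuckDuckGo’s actual implementation.

```python
# Minimal sketch of an anonymising relay in the style DuckDuckGo describes.
# The provider URL, header names, and payload/response shape are illustrative
# assumptions, not DuckDuckGo's actual implementation.
import requests

PROVIDER_URL = "https://api.example-model-provider.com/v1/chat"  # hypothetical endpoint


def relay_chat(user_message: str) -> str:
    # Forward only the message itself: no cookies, no client IP, no user agent,
    # so the request appears to originate from the relay rather than the user.
    payload = {"messages": [{"role": "user", "content": user_message}]}
    headers = {"Content-Type": "application/json"}  # deliberately nothing identifying
    response = requests.post(PROVIDER_URL, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()["reply"]  # hypothetical response field


if __name__ == "__main__":
    print(relay_chat("What does a metadata-stripping proxy actually hide?"))
```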

In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry.

DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience.

“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for — especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained.

“If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.”

To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access.

The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, joining the recently launched Venice AI from crypto entrepreneur Erik Voorhees, which offers an uncensored AI chatbot and image generator that doesn’t require accounts and doesn’t retain data.

As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots.

AI in casino games: A whole new world waiting to be dealt

Categories: Applications, Artificial Intelligence, Entertainment & Retail, Gaming, Industries

Adam Walker is an experienced writer covering the AI industry.

AI is in pretty much everyone’s conversations right now, with people using it (successfully and unsuccessfully) for a vast range of things. Let’s face it: we’ve got stars in our eyes when it comes to AI — but what’s it doing to one of the largest industries on the planet, the casino industry? How is it shaking up its games at the core? Let’s find out!

Many games are being totally revolutionised by AI stepping onto the scene, so let’s get into the nitty-gritty of which games are changing, what’s happening, and how AI is leaving its footprint on this world of online casino games!

First up: personalisation. AI really shines when it comes to personalising the slots, because it can analyse each player’s individual behaviour and start tailoring what the game shows to match. Imagine you’re playing your favourite slot and a bunch of free spins comes up — but none of them are quite what you wanted and they’re just not doing it for you today. We all know that feeling of disappointment, and honestly, it gets directed at the company, because why don’t they know you better than that? Isn’t marketing meant to be good these days?

Well, AI is changing all that and cutting the frustration that comes with it! It is capable of tracking what bonuses you use and what games you play (and even when and how you play them), and that means that suddenly, casinos can offer much more tailored options when you’re playing on the slots. Free spins for your favourite game ever, just as you sit down to relax on a Friday night? Yes, that’s much more likely now!

Personalised bonus games? These are also creeping onto the scene, along with game features that are specifically honed to tick your “like” box and give you the best possible gaming experience. And it’s only because of AI that this is becoming possible — sure, casinos tried to offer this kind of personalisation in the past, but it was simply too much for humans to manage.

How do you teach a computer to bluff? We’re not going to pretend it’s easy; getting a computer to mimic a human’s ability to deceive other players has proven a major challenge for those building AIs. However, we’re pretty much there, and AIs can now be incorporated into the online world of poker — one of the most popular casino games on the planet.

So, first off, researchers have created AIs that are good at poker; there’s been major progress in advancing how the very best AI can play, and it’s doing well. However, that’s not actually enough for casinos: they don’t want an AI that can beat human players every time, because who would ever play against that? They need an AI that can understand nuance, make mistakes occasionally, and lose — but in convincing ways that are still satisfying to play against. Now that’s a real challenge!

But if they’re successful, there will be big rewards: some people would much rather play against a computer than other humans, provided the computer makes a satisfying opponent. This is likely to be an ongoing process as AIs master how to play in each context, but it already looks promising to us! Of course, some are wary about teaching computers how to lie effectively; after all, sci-fi books and films have shown us exactly why that could be a bad idea. For the casino industry, though, it’s looking tantalising.

AI isn’t “big” in most casino games yet because it hasn’t had time to infiltrate them, but we’re likely to see it edging in from the fringes and changing more and more about how we play and enjoy games online in the years to come. It’s hugely exciting to imagine how it might revolutionise classic games like poker, blackjack, roulette, the slots, and more. However, we’re just going to have to “wait and see” here, because AI is only just unfolding its metaphorical wings and starting to flap.

NVIDIA presents latest advancements in visual AI

Categories: Applications, Artificial Intelligence, Companies, Development, NVIDIA

NVIDIA researchers are presenting new visual generative AI models and techniques at the Computer Vision and Pattern Recognition (CVPR) conference this week in Seattle. The advancements span areas like custom image generation, 3D scene editing, visual language understanding, and autonomous vehicle perception.

“Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement,” said Jan Kautz, VP of learning and perception research at NVIDIA.

“At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

Among the over 50 NVIDIA research projects being presented, two papers have been selected as finalists for CVPR’s Best Paper Awards — one exploring the training dynamics of diffusion models and another on high-definition maps for self-driving cars.

Additionally, NVIDIA has won the CVPR Autonomous Grand Challenge’s End-to-End Driving at Scale track, outperforming over 450 entries globally. This milestone demonstrates NVIDIA’s pioneering work in using generative AI for comprehensive self-driving vehicle models, also earning an Innovation Award from CVPR.

One of the headlining research projects is JeDi, a new technique that allows creators to rapidly customise diffusion models — the leading approach for text-to-image generation — to depict specific objects or characters using just a few reference images, rather than the time-intensive process of fine-tuning on custom datasets.
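
JeDi itself is a research technique and NVIDIA’s announcement doesn’t include a public interface, but the underlying idea, steering a text-to-image diffusion model with a reference image instead of fine-tuning it, can already be tried with off-the-shelf tools. The sketch below uses the open-source IP-Adapter approach via Hugging Face’s diffusers library as a rough analogy; the model IDs and adapter weights are illustrative choices, and this is not NVIDIA’s JeDi.

```python
# Analogy only: reference-image conditioning without fine-tuning, via IP-Adapter
# in Hugging Face diffusers. This is NOT NVIDIA's JeDi; it simply illustrates the
# same "customise from a few reference images" idea. Model IDs are illustrative.
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load published IP-Adapter weights so a reference image can steer generation.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference image influences the output

reference = load_image("my_character_reference.png")  # hypothetical local reference image

image = pipe(
    prompt="the same character riding a bicycle through a rainy city",
    ip_adapter_image=reference,
    num_inference_steps=30,
).images[0]
image.save("customised_output.png")
```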

Another breakthrough is FoundationPose, a new foundation model that can instantly understand and track the 3D pose of objects in videos without per-object training. It set a new performance record and could unlock new AR and robotics applications.

NVIDIA researchers also introduced NeRFDeformer, a method to edit the 3D scene captured by a Neural Radiance Field (NeRF) using a single 2D snapshot, rather than having to manually reanimate changes or recreate the NeRF entirely. This could streamline 3D scene editing for graphics, robotics, and digital twin applications.

On the visual language front, NVIDIA collaborated with MIT to develop VILA, a new family of vision language models that achieve state-of-the-art performance in understanding images, videos, and text. With enhanced reasoning capabilities, VILA can even comprehend internet memes by combining visual and linguistic understanding.

NVIDIA’s visual AI research spans numerous industries, including over a dozen papers exploring novel approaches for autonomous vehicle perception, mapping, and planning. Sanja Fidler, VP of NVIDIA’s AI Research team, is presenting on the potential of vision language models for self-driving cars.

The breadth of NVIDIA’s CVPR research exemplifies how generative AI could empower creators, accelerate automation in manufacturing and healthcare, and propel autonomy and robotics forward.

Apple is reportedly getting free ChatGPT access

Categories: Apple, Applications, Artificial Intelligence, Chatbots, Companies, Virtual Assistants

Apple’s newly announced partnership with OpenAI — which brings ChatGPT capabilities to iOS 18, iPadOS 18, and macOS Sequoia — comes without any direct money exchange.

Instead, the Cupertino-based company is leveraging its massive user base and device ecosystem as currency.

“Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments,” sources told Bloomberg’s Mark Gurman.

“NEW: Apple and OpenAI have been silent on the financial terms of their ChatGPT deal. But here are the details: Apple and OpenAI aren’t paying each other and instead there’s money to be made later on revenue sharing deals. More here: https://t.co/MGTdWeJsyG,” Gurman posted on X.

Gurman notes that OpenAI could find a silver lining by encouraging Apple users to subscribe to ChatGPT Plus, priced at $20 per month. If subscribers sign up through Apple devices, the iPhone maker will likely even claim a commission.
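
As a rough sense of the economics: the report doesn’t disclose a commission rate, so the figures below assume Apple’s standard App Store subscription terms (30% in the first year, 15% thereafter), which is an assumption rather than a confirmed detail of the deal.

```python
# Back-of-the-envelope revenue-share illustration. The commission rates are
# assumptions based on Apple's standard App Store subscription terms; the
# article does not confirm what rate (if any) applies to this deal.
CHATGPT_PLUS_MONTHLY = 20.00   # USD, as stated in the article
FIRST_YEAR_COMMISSION = 0.30   # assumed standard rate, year one
LONG_TERM_COMMISSION = 0.15    # assumed standard rate after one year

for label, rate in [("year one", FIRST_YEAR_COMMISSION), ("after year one", LONG_TERM_COMMISSION)]:
    apple_cut = CHATGPT_PLUS_MONTHLY * rate
    openai_net = CHATGPT_PLUS_MONTHLY - apple_cut
    print(f"{label}: Apple ≈ ${apple_cut:.2f}/month, OpenAI ≈ ${openai_net:.2f}/month per subscriber")
```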

Apple’s AI strategy extends beyond OpenAI. The company is reportedly in talks to offer Google’s Gemini chatbot as an additional option later this year, signalling its intent to provide users with diverse AI experiences without necessarily having to make such major investments itself.

The long-term vision for Apple involves capturing a slice of the revenue generated from monetising chatbot results on its operating systems. This move anticipates a shift in user behaviour, with more people relying on AI assistants rather than traditional search engines like Google.

While Apple’s AI plans are ambitious, challenges remain. The report highlights that the company has yet to secure a deal with a local Chinese provider for chatbot features, though discussions with local firms like Baidu and Alibaba are underway. Initially, Apple Intelligence will be limited to US English, with expanded language support planned for the following year.

The Apple-OpenAI deal represents a novel approach to collaboration in the AI space, where brand exposure and technological integration are valued as much as, if not more than, direct financial compensation.

TickLab: Revolutionising finance with AI-powered quant hedge fund and E.D.I.T.H.

TickLab, founded by visionary CTO Yasir Albayati, is at the forefront of innovation in the financial sector, specialising in deploying advanced decentralised AI into finance. Our company operates as a quantitative hedge fund, focusing on crypto, stocks, and forex markets. With the launch of our cutting-edge Quantitative Decentralised AI Hedge Fund, we offer investors the…
