Artificial Intelligence News

The AI News blog, updated regularly.

SoftBank chief: Forget AGI, ASI will be here within 10 years

SoftBank founder and CEO Masayoshi Son has claimed that artificial superintelligence (ASI) could be a reality within the next decade.

Speaking at SoftBank’s annual meeting in Tokyo on June 21, Son painted a picture of a future where AI far surpasses human intelligence, potentially revolutionising life as we know it. Son asserted that by 2030, AI could be “one to 10 times smarter than humans,” and by 2035, it might reach a staggering “10,000 times smarter” than human intelligence.

SoftBank’s CEO made a clear distinction between artificial general intelligence (AGI) and ASI. According to Son, AGI would be equivalent to a human “genius,” potentially up to 10 times more capable than an average person. ASI, however, would be in a league of its own, with capabilities 10,000 times beyond human potential.

Son’s predictions align with the goals of Safe Superintelligence Inc. (SSI), founded by Ilya Sutskever, former chief scientist at OpenAI, along with Daniel Levy and Daniel Gross. SSI’s mission, as stated on their website, is to “approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.”

The timing of these announcements underscores the growing focus on superintelligent AI within the tech industry. While SoftBank appears to be prioritising the development of ASI, SSI is emphasising the importance of safety in this pursuit. As stated by SSI’s founders, “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

It’s worth noting that the scientific community has yet to reach a consensus on the feasibility or capabilities of AGI or ASI. Current AI systems, while impressive in specific domains, are still far from achieving human-level reasoning across all areas.

Son’s speech took an unexpectedly personal turn when he linked the development of ASI to his own sense of purpose and mortality. “SoftBank was founded for what purpose? For what purpose was Masayoshi Son born? It may sound strange, but I think I was born to realise ASI. I am super serious about it,” he declared.

Son’s predictions and SoftBank’s apparent pivot towards ASI development, coupled with the formation of SSI, raise important questions about the future of AI and its potential impact on society. While the promise of superintelligent AI is enticing, it also brings concerns about job displacement, ethical considerations, and the potential risks associated with creating an intelligence that far surpasses our own.

Whether Son’s vision of ASI within a decade proves prescient or overly optimistic remains to be seen, but one thing is certain: the race towards superintelligent AI is heating up, with major players positioning themselves at the forefront.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


Apple’s EU AI delay: Innovation vs regulation


About the Author

By Dashveenjit Kaur


Dashveenjit is an experienced tech and business journalist with a determination to find and produce stories for daily online and print publication. She is also an experienced parliament reporter who occasionally covers the lifestyle and arts industries.


Apple announced on Friday that it would withhold its highly anticipated Apple Intelligence features, iPhone Mirroring, and SharePlay Screen Sharing from EU users. While not entirely unexpected, the decision underscores the growing tension between rapid technological advancement and the EU’s stringent regulatory framework, particularly the Digital Markets Act (DMA) and the General Data Protection Regulation (GDPR).

From the EU’s perspective, this delay represents both a triumph and a challenge. It demonstrates the effectiveness of regulations safeguarding user privacy and promoting fair competition. The DMA and GDPR have forced tech giants to pause and reconsider their approaches, potentially leading to more user-centric and privacy-conscious products. However, this victory comes with a price: the risk of falling behind in the global AI race. 

As other regions forge ahead with less restrictive policies, the EU must carefully balance its regulatory stance with the need to foster innovation and maintain competitiveness in the global tech landscape. For Apple, this delay is likely a calculated move. The company justifies the decision by citing security and privacy concerns, which reinforces its brand image as a tech giant that takes privacy seriously.

All in all, this could preserve user trust while giving Apple more time to adapt its AI features for compatibility with EU law. But it also raises the risk that Apple cedes ground to competitors who manage to navigate the regulatory environment faster. The postponement of AI offerings by other tech behemoths such as Meta and Google in the EU also points to a broader, industry-wide challenge.

Many of these companies say their AI systems need to be trained on vast amounts of data to work correctly, but claim that GDPR restrictions drastically limit what they can do in practice. That raises the question: can advanced AI technology coexist with some of the world’s strictest data protection regulations?

Apple’s AI products will almost certainly face the same scrutiny as its competitors’. The core difficulty is the data-hungry nature of modern AI systems: to provide personalised and effective services, these AIs require access to enormous datasets, which may conflict with GDPR principles such as data minimisation and purpose limitation.

However, Apple could have an advantage in this area. Its emphasis on on-device processing and differential privacy approaches may enable it to develop AI features more compliant with EU standards. If successful, this might establish a new norm for privacy-preserving AI, providing Apple an advantage in the European market.
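For readers unfamiliar with the technique, the following is a minimal sketch of differential privacy using the Laplace mechanism, the kind of approach the article refers to. The epsilon value, sensitivity, and figures are illustrative and not drawn from Apple’s actual implementation.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of a numeric statistic.

    Noise drawn from a Laplace distribution with scale sensitivity/epsilon
    masks any single user's contribution to the reported value.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative use: report how many users enabled a feature without
# revealing whether any individual user did (figures are hypothetical).
true_count = 4312
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Privacy-preserving count: {noisy_count:.0f}")
```

Smaller epsilon values add more noise and give stronger privacy guarantees at the cost of accuracy, which is the trade-off any privacy-preserving AI feature has to manage.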

And it’s not Apple’s first encounter with EU regulation. In September 2021, the company complained about parts of the DMA rules that would have forced it to allow users to sideload apps from outside its App Store for the first time. Apple claimed that doing so would jeopardise user privacy and security, reinforcing its long-standing belief in the sanctity of its closed ecosystem.

Furthermore, Apple’s recent move to prohibit progressive web applications (PWAs) in the EU drew objections from developers. Many saw the decision as yet another attempt to resist regulatory pressure. However, in an unexpected turn of events, the EU concluded that Apple’s treatment of PWAs did not breach DMA guidelines, prompting the company to reconsider its decision.

Global implications: Fragmentation or harmonisation?

These incidents shed light on the intricate relationship between tech companies and regulators. Companies like Apple are known for resisting regulations they perceive as too strict. However, they must also be ready to adjust their strategies when their understanding of the rules is questioned.

Apple’s delay of its AI features in the EU is more than a bump in the road. It illustrates the complex relationship between regulation and technological innovation. Finding that balance will be vital going forward: regulators and the tech industry alike will need to adapt to build a world where high-powered AI can operate while respecting human rights and privacy.

It is a reminder that there are no clear courses to follow in the constantly changing world of AI. Governments, in turn, will need to embrace fresh thinking and creative policymaking if the power of AI is to be harnessed for good in ways that stay true to the values and rights on which our digital society rests.

However, the timing of the controversy raises questions about the future of global tech development. Will the digital landscape continue to fragment, with different functionality available in different regions depending on what each jurisdiction’s regulations permit? Or are we heading towards a more harmonised global approach to tech regulation and development?

As consumers, we find ourselves in a constant struggle between the forces of innovation and regulation. As technology advances, we are eager to embrace the newest AI-powered features that enhance our digital experiences and cater to our individual needs. However, it is equally important to us to prioritise protecting our privacy and data. 

Companies such as Apple face the challenge of pushing the boundaries of what is possible with AI and establishing new benchmarks for privacy and security. To sum up, Apple’s decision to delay its AI features in the EU is a major story in the continuing discussion of tech innovation and regulation. It highlights the need for a more sophisticated and collaborative strategy to form our digital future. 

As we go down this path, open and constructive conversations between all stakeholders (tech firms, regulators, and users) will be all the more important for finding solutions that promote innovation while safeguarding basic rights. Indeed, the future of AI, in Europe and globally, may be at stake as we navigate these stormy seas.

(Image Credit: Apple)





ChatGPT Prompt Generator: Unleashing the power of AI conversations

Categories: Artificial Intelligence

Duncan is an award-winning editor with more than 20 years’ experience in journalism. Having launched his tech journalism career as editor of Arabian Computer News in Dubai, he has since edited an array of tech and digital marketing publications, including Computer Business Review, TechWeekEurope, Figaro Digital, Digit and Marketing Gazette.

In the ever-evolving digital landscape, where AI is rapidly transforming the way we interact and communicate, WebUtility’s ChatGPT Prompt Generator emerges as a game-changer. This innovative tool empowers users to harness the full potential of ChatGPT, one of the most advanced language models developed by OpenAI.

At its core, the ChatGPT Prompt Generator is designed to simplify the process of crafting tailored prompts for ChatGPT. By leveraging the tool’s intuitive interface, users can effortlessly create prompts that align with their specific needs, whether they’re seeking assistance with customer support, content creation, or creative writing endeavors.

The beauty of this tool lies in its user-friendly approach. With just a few clicks, users can select the desired action, such as ‘Create’, ‘Explain’, ‘Analyse’ or ‘Write’, and then specify the focus area. This level of customisation ensures that the generated prompts are contextually relevant and tailored to the user’s requirements.
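WebUtility has not published how its generator assembles prompts, but a minimal sketch of an action-plus-focus-area prompt builder might look like the following; the template strings and function name are assumptions for illustration, not the tool’s actual code.

```python
# Hypothetical sketch of an action + focus-area prompt builder,
# not WebUtility's actual implementation.
TEMPLATES = {
    "Create":  "Create {focus}. Include a short outline before the full text.",
    "Explain": "Explain {focus} in plain language, with one concrete example.",
    "Analyse": "Analyse {focus}. List strengths, weaknesses, and open questions.",
    "Write":   "Write {focus}. Match the tone to a professional audience.",
}

def build_prompt(action: str, focus: str) -> str:
    """Combine a selected action with a user-supplied focus area."""
    if action not in TEMPLATES:
        raise ValueError(f"Unsupported action: {action!r}")
    return TEMPLATES[action].format(focus=focus)

print(build_prompt("Explain", "how transformer attention works"))
```

The resulting string would then be pasted into (or sent to) ChatGPT as the opening prompt of the conversation.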

But the true power of the ChatGPT Prompt Generator extends beyond mere convenience. By automating the prompt creation process, the tool saves users valuable time and effort, enabling them to engage with ChatGPT in a more efficient and productive manner. Gone are the days of generic or irrelevant responses — every conversation is now tailored to the user’s specific needs.

One of the standout features of this tool is its ability to understand natural language and adapt to various contexts. Powered by cutting-edge AI technology, the ChatGPT Prompt Generator ensures that the generated prompts are thoughtful, contextually appropriate, and designed to elicit meaningful responses from ChatGPT.

Whether you’re a business professional seeking to streamline customer interactions, a content creator looking to generate engaging material, or a writer exploring new creative avenues, the ChatGPT Prompt Generator is your ultimate companion. By harnessing the power of AI, this tool empowers you to unlock the limitless potential of ChatGPT and elevate your conversations to new heights.

For those seeking to explore the vast realm of AI tools further, the AI Tools Directory at AI Parabellum is a treasure trove of resources. This comprehensive directory curates a wide range of AI-powered tools, spanning various domains and applications, ensuring that users can find the perfect solution for their specific needs.


EU AI legislation sparks controversy over data transparency

Categories: Artificial Intelligence, Ethics & Society, Security

As a tech journalist, Zul focuses on topics including cloud computing, cybersecurity, and disruptive technology in the enterprise industry. He has expertise in moderating webinars and presenting content on video, in addition to having a background in networking technology.

The European Union recently introduced the AI Act, a new governance framework compelling organisations to enhance transparency regarding their AI systems’ training data.

Should this legislation come into force, it could penetrate the defences that many in Silicon Valley have built against such detailed scrutiny of AI development and deployment processes.

The EU’s AI Act, intended to be implemented gradually over the next two years, aims to address these issues. New laws take time to embed, and a gradual rollout gives regulators time to adapt and businesses time to adjust to their new obligations. However, the implementation of some rules remains in doubt.

One of the more contentious sections of the Act stipulates that organisations deploying general-purpose AI models, such as ChatGPT, must provide “detailed summaries” of the content used to train them. The newly established AI Office has announced plans to release a template for organisations to follow in early 2025, following consultation with stakeholders.
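The AI Office’s template has not yet been published, so any structure is speculative. Purely as an illustration, a “detailed summary” record might capture fields along the lines sketched below; every field name and value is an assumption, not part of the Act or the forthcoming template.

```python
# Hypothetical structure for a training-content summary record.
# The EU AI Office's real template is not yet published; all fields are assumptions.
from dataclasses import dataclass

@dataclass
class TrainingContentSummary:
    source_name: str          # e.g. a crawl, licensed corpus, or public dataset
    content_type: str         # text, images, audio, code, ...
    collection_period: str    # rough date range of acquisition
    licensing_basis: str      # licence, public domain, TDM exception, etc.
    approximate_volume: str   # order-of-magnitude size rather than exact counts
    notes: str = ""           # opt-outs honoured, filtering applied, ...

example = TrainingContentSummary(
    source_name="Public web crawl (illustrative)",
    content_type="text",
    collection_period="2022-2023",
    licensing_basis="Text-and-data-mining exception, opt-outs respected",
    approximate_volume="~10^9 documents",
)
print(example)
```

How granular such records must be, per source or per aggregated category, is precisely the detail the consultation with stakeholders is meant to settle.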

AI companies have expressed strong resistance to revealing their training data, describing this information as trade secrets that would provide competitors with an unfair advantage if made public. The level of detail required in these transparency reports will have significant implications for both smaller AI startups and major tech companies like Google and Meta, which have positioned AI technology at the centre of their future operations.

Over the past year, several top technology companies, including Google, OpenAI, and Stability AI, have faced lawsuits from creators who claim their content was used without permission to train AI models. Under growing scrutiny, however, some tech companies have, in the past two years, broken ranks and negotiated content-licensing deals with individual media outlets and websites. Some creators and lawmakers remain concerned that these measures are not sufficient.

In Europe, differences among lawmakers are stark. Dragos Tudorache, who led the drafting of the AI Act in the European Parliament, argues that AI companies should be required to open-source their datasets. Tudorache emphasises the importance of transparency so that creators can determine whether their work has been used to train AI algorithms.

Conversely, under the leadership of President Emmanuel Macron, the French government has privately opposed introducing rules that could hinder the competitiveness of European AI startups. French Finance Minister Bruno Le Maire has emphasised the need for Europe to be a world leader in AI, not merely a consumer of American and Chinese products.

The AI Act acknowledges the need to balance the protection of trade secrets with the facilitation of rights for parties with legitimate interests, including copyright holders. However, striking this balance remains a significant challenge.

Views within the industry differ. Matthieu Riouf, CEO of the AI-powered image-editing firm Photoroom, compares the situation to culinary practice, claiming there is a secret part of the recipe that the best chefs wouldn’t share; his is just one of many perspectives in an industry divided over how much secrecy is justified. Thomas Wolf, co-founder of one of the world’s top AI startups, Hugging Face, argues that while there will always be an appetite for transparency, that doesn’t mean the entire industry will adopt a transparency-first approach.

A series of recent controversies have driven home just how complicated this all is. OpenAI demonstrated the latest version of ChatGPT in a public session, where the company was roundly criticised for using a synthetic voice that sounded nearly identical to that of actress Scarlett Johansson. These examples point to the potential for AI technologies to violate personal and proprietary rights.

Throughout the development of these regulations, there has been heated debate about their potential effects on future innovation and competitiveness in the AI world. In particular, the French government has urged that innovation, not regulation, should be the starting point, given the dangers of regulating aspects that have not been fully comprehended.

The way the EU regulates AI transparency could have significant impacts on tech companies, digital creators, and the overall digital landscape. Policymakers thus face the challenge of fostering innovation in the dynamic AI industry while simultaneously guiding it towards safe, ethical decisions and preventing IP infringement.

In sum, if adopted, the EU AI Act would be a significant step toward greater transparency in AI development. However, the practical implementation of these rules, and their effects on the industry, could still be far off. Moving forward, especially at the dawn of this new regulatory paradigm, the balance between innovation, ethical AI development, and the protection of intellectual property will remain a central and contested issue for stakeholders of all stripes to grapple with.


Amazon will use computer vision to spot defects before dispatch

Categories: Amazon, Applications, Artificial Intelligence, Companies, Industries, Logistics

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)

Amazon will harness computer vision and AI to ensure customers receive products in pristine condition and further its sustainability efforts. The initiative — dubbed “Project P.I.” (short for “private investigator”) — operates within Amazon fulfilment centres across North America, where it will scan millions of products daily for defects.

Project P.I. leverages generative AI and computer vision technologies to detect issues such as damaged products or incorrect colours and sizes before they reach customers. The AI model not only identifies defects but also helps uncover the root causes, enabling Amazon to implement preventative measures upstream. This system has proven highly effective in the sites where it has been deployed, accurately identifying product issues among the vast number of items processed each month.

Before any item is dispatched, it passes through an imaging tunnel where Project P.I. evaluates its condition. If a defect is detected, the item is isolated and further investigated to determine if similar products are affected.

Amazon associates review the flagged items and decide whether to resell them at a discount via Amazon’s Second Chance site, donate them, or find alternative uses. This technology aims to act as an extra pair of eyes, enhancing manual inspections at several North American fulfilment centres, with plans for expansion throughout 2024.
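Amazon has not published Project P.I.’s internals, but the dispatch-check flow described above can be sketched roughly as follows; the defect check, review logic, and item data are stand-ins for illustration rather than Amazon’s actual systems or APIs.

```python
# Illustrative sketch of the dispatch-check flow described above.
# The defect "model" and disposition rules are stand-ins, not Amazon's systems.
from enum import Enum
from typing import Optional

class Disposition(Enum):
    DISPATCH = "dispatch to customer"
    RESELL = "resell at a discount via Amazon Second Chance"
    DONATE = "donate"
    OTHER = "find an alternative use"

def detect_defect(item: dict) -> Optional[str]:
    """Stand-in for the computer-vision check in the imaging tunnel."""
    return item.get("defect")  # e.g. "damaged", "wrong colour", or None

def associate_review(defect: str) -> Disposition:
    """Stand-in for the human review of flagged items."""
    return Disposition.RESELL if defect == "cosmetic damage" else Disposition.DONATE

def check_item(item: dict) -> Disposition:
    defect = detect_defect(item)
    if defect is None:
        return Disposition.DISPATCH
    # In the real system, similar stock would also be investigated at this point.
    return associate_review(defect)

print(check_item({"sku": "B000EXAMPLE", "defect": "cosmetic damage"}))
```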

Dharmesh Mehta, Amazon’s VP of Worldwide Selling Partner Services, said: “We want to get the experience right for customers every time they shop in our store.

“By leveraging AI and product imaging within our operations facilities, we are able to efficiently detect potentially damaged products and address more of those issues before they ever reach a customer, which is a win for the customer, our selling partners, and the environment.”

Project P.I. also plays a crucial role in Amazon’s sustainability initiatives. By preventing damaged or defective items from reaching customers, the system helps reduce unwanted returns, wasted packaging, and unnecessary carbon emissions from additional transportation.

Kara Hurst, Amazon’s VP of Worldwide Sustainability, commented: “AI is helping Amazon ensure that we’re not just delighting customers with high-quality items, but we’re extending that customer obsession to our sustainability work by preventing less-than-perfect items from leaving our facilities, and helping us avoid unnecessary carbon emissions due to transportation, packaging, and other steps in the returns process.”

In parallel, Amazon is utilising a generative AI system equipped with a Multi-Modal LLM (MLLM) to investigate the root causes of negative customer experiences.

When defects reported by customers slip through initial checks, this system reviews customer feedback and analyses images from fulfilment centres to understand what went wrong. For example, if a customer receives the wrong size of a product, the system examines the product labels in fulfilment centre images to pinpoint the error.
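Amazon has not detailed how this multi-modal system is invoked, but the cross-check described above might be sketched as follows; the model call is a placeholder, not a real Amazon or vendor API, and the filenames and wording are illustrative.

```python
# Hypothetical sketch of pairing customer feedback with fulfilment-centre
# images to find a root cause. query_multimodal_model() is a placeholder.
def query_multimodal_model(prompt: str, images: list) -> str:
    """Stand-in for a multi-modal LLM call."""
    return "Label in the shelf image shows size M; the customer ordered size L."

def investigate_complaint(complaint: str, fc_images: list) -> str:
    prompt = (
        "A customer reported the following issue:\n"
        f"{complaint}\n"
        "Review the attached fulfilment-centre images (e.g. product labels) "
        "and explain the most likely point of failure."
    )
    return query_multimodal_model(prompt, fc_images)

finding = investigate_complaint(
    complaint="Received size M instead of the size L I ordered.",
    fc_images=["bin_photo.jpg", "label_scan.jpg"],  # illustrative filenames
)
print(finding)
```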

This technology is also beneficial for Amazon’s selling partners, especially the small and medium-sized businesses that make up over 60% of Amazon’s sales. By making defect data more accessible, Amazon helps these sellers rectify issues quickly and reduce future errors.


NLEPs: Bridging the gap between LLMs and symbolic reasoning

Categories: Artificial Intelligence, Development, Research

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)

Researchers have introduced a novel approach called natural language embedded programs (NLEPs) to improve the numerical and symbolic reasoning capabilities of large language models (LLMs). The technique involves prompting LLMs to generate and execute Python programs to solve user queries, then output solutions in natural language.

While LLMs like ChatGPT have demonstrated impressive performance on various tasks, they often struggle with problems requiring numerical or symbolic reasoning.

NLEPs follow a four-step problem-solving template: calling necessary packages, importing natural language representations of required knowledge, implementing a solution-calculating function, and outputting results as natural language with optional data visualisation.
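To make the template concrete, here is a minimal sketch of what a generated NLEP might look like for a simple query; the query, the knowledge dictionary, and the function name are illustrative rather than examples taken from the researchers’ paper.

```python
# Sketch of a generated NLEP for the query:
# "Which of these scientists were born in a leap year?"

# Step 1: call necessary packages.
import calendar

# Step 2: import natural-language representations of required knowledge.
birth_years = {
    "Marie Curie": 1867,
    "Albert Einstein": 1879,
    "Alan Turing": 1912,
    "Katherine Johnson": 1918,
}

# Step 3: implement a function that calculates the solution.
def solve() -> list[str]:
    return [name for name, year in birth_years.items() if calendar.isleap(year)]

# Step 4: output the result as natural language.
answer = solve()
print(f"Scientists born in a leap year: {', '.join(answer)}.")
```

Swapping out the birth_years dictionary or the predicate inside solve() adapts the same scaffold to a different query, which is the reuse property noted below.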

This approach offers several advantages, including improved accuracy, transparency, and efficiency. Users can investigate generated programs and fix errors directly, avoiding the need to rerun entire models for troubleshooting. Additionally, a single NLEP can be reused for multiple tasks by replacing certain variables.

Beyond accuracy improvements, NLEPs could enhance data privacy by running programs locally, eliminating the need to send sensitive user data to external companies for processing. The technique may also boost the performance of smaller language models without costly retraining.

However, NLEPs rely on a model’s program generation capability and may not work as well with smaller models trained on limited datasets. Future research will explore methods to make smaller LLMs generate more effective NLEPs and investigate the impact of prompt variations on reasoning robustness.

