SoftBank Group Corp. late today confirmed that it will buy the chipmaker Ampere Computing LLC in a deal that’s valued at $6.5 billion
It said it expects the acquisition to close during the second half of the year
Lead investors Carlyle Group and Oracle have both agreed to sell their stakes in the company
Ampere will operate as an independent subsidiary of SoftBank
It will continue to be based at its headquarters in Santa Clara
Founded in 2017 by former Intel President Renee James, the chipmaker specializes in Arm-based central processing units for data center servers
Its most powerful chip is the new AmpereOne M
The processor features up to 192 cores and higher memory bandwidth than its predecessors
enabling applications to move data to and from the random-access memory much faster
The company is also working on the development of an even more powerful chipset called Aurora that will feature up to 512 cores
It will also have a dedicated module for artificial intelligence
and ship with high-bandwidth memory, or HBM
a type of high-speed RAM that’s widely used in AI accelerators
Ampere’s products are an alternative to the x86-based server chips sold by Intel Corp
The Arm-based architecture means they generally consume much less energy
SoftBank Chairman and Chief Executive Masayoshi Son said in a statement he’s buying Ampere
which employs about 1,000 semiconductor engineers
because the future of “artificial super intelligence” requires breakthrough computing power
“Ampere’s expertise in semiconductors and high-performance computing will help accelerate this vision
and deepens our commitment to AI innovation in the United States,” he added
SoftBank is already a major player in the chip market thanks to its stake in Arm Holdings Plc, which it acquired for $32 billion back in 2016. Arm went public via an initial public offering in 2023
but SoftBank still holds a majority stake in the company
Last July, SoftBank inked a deal to acquire another chipmaker called Graphcore Ltd.
Graphcore has built a specialized AI processor known as the Bow IPU that’s based on wafer-on-wafer technology
It comprises two vertically stacked layers
with one containing the logic circuits and a second that holds the capacitors
which are components that can hold an electric charge
The capacitors deliver this electricity to the logic circuits more efficiently
It has been said that SoftBank may encourage Ampere to partner with Graphcore and build AI servers that include not only those accelerators
but also Arm-based CPUs that can help to coordinate AI workloads
Softbank Is Acquiring Ampere: What Are The Implications?
By Matt Kimball, Senior Analyst, Servers, for Moor Insights and Strategy
Mar 21, 01:27pm EDT
Softbank's planned acquisition of Ampere raises some interesting questions about the market for ...
The semiconductor market got a bit of a surprise this week as holding company Softbank announced its intent to acquire Arm CPU vendor Ampere Computing for $6.5 billion in cash
the deal is expected to close later in 2025
a few quarters after Softbank, which still holds a majority stake in Arm, acquired AI chip startup Graphcore
I want to dig into this a little more and explore potential outcomes
Ampere’s goal at its founding seemed quite simple: develop the first commercially successful Arm-based server CPU to compete with Intel’s Xeon and the newly launched EPYC line from AMD
While there had been many attempts in previous years by companies such as Calxeda
Cavium (now part of Marvell) and even AMD (remember the Seattle project?) to drive the adoption of Arm in the datacenter
AWS and its broad deployment of Arm-based Graviton chips
combined with Arm’s release of its Neoverse architecture
brought acceptance and support from the ISV ecosystem
Supporting Arm was no longer an afterthought — it was given the same priority as x86 in the Linux and open source communities
The first Ampere Altra CPU launched in 2020
with Oracle Cloud Infrastructure as its first customer
(Oracle has about a 30% stake in Ampere.) In 2022
Ampere landed Microsoft Azure as cloud customer number two and secured a design win with HPE (for the ProLiant RL300)
The company landed deals with Google Cloud and several Chinese cloud providers in the following years
AWS’s Graviton has been something of a blessing and a curse for Ampere. Its success validated Arm and accelerated ecosystem support, which in turn motivated other CSPs and hyperscalers to experiment with and deploy Arm-based designs. But what these hyperscalers learned next from Graviton changed the dynamic: it was more cost-effective to develop Arm-based server chips in-house than it was to buy them commercially.
Even if the up-front capital investment was more expensive
the benefits of developing very tailored silicon that fit within a hyperscaler’s power-performance envelope delivered longer-term profitability
This is not a knock against the very fine commercial CPUs that Ampere designs and builds
it’s the same challenge CSPs have with x86 vendors
A CPU built to a general specification will never deliver the same environment-specific power-to-performance profile as a CPU explicitly designed for that environment
If CSPs like Azure can turn the knobs on a CPU design to deliver exactly the power-performance profile their environments need, that adds up to significant cost savings across the millions of CPUs deployed.
Arm’s Neoverse architecture and its program for custom silicon solutions considerably lowered the barrier to designing chips in-house
This turn toward custom silicon — sometimes in-house and sometimes through third parties like Broadcom and Marvell — has created a challenge for Ampere
On the one hand, Ampere has designed chips that would deliver great value in the enterprise (as Xeon and EPYC do), but it doesn’t have an enterprise market ready to adopt them. On the other, it builds chips that could be an excellent value for CSPs, except that the economics point those potential customers to in-house design. (It is a near certainty that if x86 licensing weren’t what it is, we would see the same dynamic playing out there: Xeon and EPYC would have no market with the CSP and hyperscale communities.)
Azure launched its in-house Arm-based Cobalt CPU (and Maia accelerator)
Google Cloud launched its custom Axion chip
Chinese giant Ali Cloud launched its Aliyun CPU, and the list goes on. Against this backdrop, it has been difficult for Ampere to maintain its footing.
This took the company from exploring the potential for an IPO in 2022 to this week’s news that it’s being acquired by Softbank
One last note on this front: Softbank is getting a steal in Ampere
It is a world-class design shop with a strong executive team and a well-established go-to-market operation — all of the elements critical to success in its segment
So why does Softbank acquire a company like Ampere, with a great product, IP and people, but no significant addressable market at the moment?
Softbank sees an opportunity in the custom silicon space
Combining the roughly 1,000 people who make up the design staff for Ampere with the approximately 500 people who make up Graphcore is a good start
If Softbank can make a couple of other strategic acquisitions to cover interconnects and networking, it would have the technology, people and go-to-market muscle memory to compete effectively in relatively short order. In other words, Softbank has the central pieces in place to pivot to a commercial market quickly.
That could be a market where Arm servers are deployed around the enterprise — or it could be a commercial market where Arm CPUs
XPUs and other silicon enable “black box” solutions like AI factories (or some other technology factory) with very specific performance characteristics
There is some speculation about whether this acquisition is lining up to support the recently announced Stargate Project
of which both Softbank and Oracle are a part
I don’t believe Softbank would make a $6.5 billion investment in CPUs that will play some yet-to-be-defined role in the project
Wouldn’t Oracle be the better suitor in that case?
Larry Ellison and his team could use Ampere for vertical integration in both Stargate and OCI
If in fact Stargate is one of the purchase drivers for Softbank
I think we are still looking at a custom silicon design to support what will be another hyperscale environment in Stargate
This would enable Softbank/Ampere to gain leverage and credibility with other potential customers
And is there still opportunity for Ampere in the cloud?
Azure and a few others are designing their own CPUs and accelerators
other hyperscalers want very bespoke compute platforms but can’t or don’t want to hire entire teams to design the chips and manage their manufacture
The bigger question I have is whether Softbank (or any company) is able to acquire and assemble all of the pieces effectively to satisfy these potential customers
Buying the technology is actually the easy part
Bringing disparate teams and organizational cultures together is the more challenging part
The cloud and AI have been two significant disruptors in the market in so many different ways
Semiconductors are one of the affected segments where I am not sure anybody could have predicted that we would be where we are today
I can certainly say I never imagined I would be writing an article where I described designing a CPU as relatively easy
The acquisition of Ampere by Softbank is certainly not being made for charity or whimsical purposes
and I suspect it is tied to the custom silicon opportunity
New workloads in different operating environments require custom silicon — CPUs
training and inference accelerators — to deliver the best performance possible
Softbank has the beginnings of that custom IP shop
Whether Stargate is the first customer or not is another question
Moor Insights &amp; Strategy, like all tech industry research and analyst firms, provides or has provided paid services to technology companies, including acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights &amp; Strategy currently has (or has had) a paid business relationship with AMD.
SoftBank plans to acquire Ampere Computing for $6.5 billion
Ampere will operate as a wholly owned subsidiary of SoftBank Group and retain its name
while lead investors Carlyle and Oracle will sell their stakes
Its chips are available on most major cloud providers
The company has focused on the high-end of the server market
with its AmpereOne Aurora sporting up to 512 Ampere Cores
it develops its own custom cores, which gives it more control over the development of its chips
SoftBank remains the single largest shareholder of Arm
which unsuccessfully sued Qualcomm over the latter’s attempts to develop custom cores and reduce fees via its Nuvia acquisition
Arm is also planning to release its own server CPU
moving beyond the licensing model to develop products that compete with its customers - including Ampere
The company is rumored to be developing an AI chip
SoftBank has also acquired struggling AI chip company Graphcore, and is believed to be looking into setting up another AI chip venture for $100bn
The company is also the largest backer of OpenAI's Stargate (and an OpenAI funder itself)
the deal is expected to close in the second half of 2025
The Napoleon of tech marches toward the data center sector
“The future of Artificial Super Intelligence requires breakthrough computing power,” said Masayoshi Son
and deepens our commitment to AI innovation in the United States.”
Ampere CEO Renee James added: “With a shared vision for advancing AI, we are excited to join SoftBank Group and partner with its portfolio of leading technology companies, and we are excited to drive forward our AmpereOne roadmap for high performance Arm processors and AI.”
SoftBank was rumored to be looking to invest in Ampere alongside Oracle back in 2021 in a deal that would have valued it at $8 billion. In 2022, the company confidentially filed for an initial public offering.
In a move that has much of the semiconductor and AI chip industry saying “Well, that’s interesting,” Softbank Group announced its plan to acquire Arm processor design company Ampere Computing in a $6.5 billion cash deal, with the agreement of current investors Carlyle Group and Oracle.
The deal is expected to close in the second half of 2025. By acquiring Ampere
SoftBank now owns both the underlying architecture (Arm) and one of the top companies designing Arm-based data center CPUs
This vertical integration play can give SoftBank a stronger position in the market for future data center processors
and fits with the company’s vision of being a major player in the next generation of data center
cloud, and AI dominated computing
Ampere and its roughly 1,000 semiconductor engineers will continue to operate as an independent unit within SoftBank, and will not be rolled up into the existing Arm business unit, which licenses the core architecture and instruction set for the RISC design.
“We are excited to join SoftBank Group and partner with its portfolio of leading technology companies,” said Ampere CEO Renee James, “and we are excited to drive forward our AmpereOne® roadmap for high performance Arm processors and AI.”
With Ampere’s focus on sustainable computing, and the advantages that Arm processors can offer in terms of performance-per-watt, the acquisition fits well with the current industry model of building more sustainable futures and meeting demands to be a green tech leader.
SoftBank can now offer end-to-end technology solutions
from edge devices (via Arm) to cloud-scale processors (via Ampere)
At the time of the Arm acquisition, demand for the processors was seen as being driven by the future of IoT. Since then, the focus has clearly moved to the data center, as the energy efficiency of Arm-based designs meshes well with the demands of the data center industry.
According to the announcement of the acquisition
the rationale behind the purchase is “aligned with SBG’s broader strategic vision and commitment to driving innovation in AI and compute.”
“The future of Artificial Super Intelligence requires breakthrough computing power,” Son said. “Ampere’s expertise in semiconductors and high-performance computing will help accelerate this vision and deepens our commitment to AI innovation in the United States.”
Ampere has announced that its next-generation processor, the AmpereOne-3, is scheduled to launch in 2025. The processor is currently in the fabrication stage at chip foundry TSMC
The expectation is that this advanced CPU will feature 256 cores and be manufactured using TSMC's 3-nanometer (3nm) process technology
marking a significant evolution in Ampere's processor lineup
In addition to the increased core count and the use of the more efficient 3nm process
the new processors are expected to support 12 channels of DDR5 memory
potentially a 50% increase in memory bandwidth compared to their existing products
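To sanity-check the 50% figure: theoretical peak DDR5 bandwidth scales linearly with channel count, so moving from 8 channels (an assumed baseline for the current parts, not stated in the article) to 12 channels at the same transfer rate yields exactly a 50% increase. A rough sketch, using a hypothetical 5600 MT/s transfer rate:

```python
# Rough peak-bandwidth estimate for a DDR5 memory subsystem.
# Assumptions (not from the article): 64-bit channels, 5600 MT/s,
# and an 8-channel baseline for the existing AmpereOne parts.
def peak_bandwidth_gb_s(channels: int, mt_per_s: int = 5600, bus_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s: transfers/s x bytes per transfer."""
    return channels * mt_per_s * 1e6 * (bus_bits // 8) / 1e9

current = peak_bandwidth_gb_s(8)    # assumed 8-channel baseline
nextgen = peak_bandwidth_gb_s(12)   # 12 channels per the report

print(f"8-channel:  {current:.1f} GB/s")   # 358.4 GB/s
print(f"12-channel: {nextgen:.1f} GB/s")   # 537.6 GB/s
print(f"increase:   {nextgen / current - 1:.0%}")  # 50%
```

The ratio 12/8 = 1.5 holds regardless of the transfer rate chosen, which is why the 50% claim does not depend on the assumed DDR5 speed grade.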
While processors such as the next-generation Ampere chips lack specialized AI accelerators like GPUs or TPUs, their high core counts and memory bandwidth make them efficient candidates for running LLM inference in production environments. Such workloads can benefit from the processors’ capabilities while delivering improved sustainability compared with GPU- or x86-based environments.
The Ampere acquisition is not SoftBank’s first step into the AI data center processor market
having acquired custom AI silicon developer Graphcore in June 2024.
Graphcore was one of the first companies to validate market need for non-GPU AI accelerators. It introduced the Intelligence Processing Unit (IPU), a non-GPU design built from the ground up for AI and ML workloads, as well as a full-stack AI solution that included its Poplar SDK, which allows ML models to run on the IPU. Regardless of the technical interest in the IPU and its non-GPU design, however, the company ran straight into the buzzsaw of massive interest in Nvidia and the AI GPUs released in the same time frame.
Graphcore co-founder and CEO Nigel Toon was optimistic about the future of the technology after the SoftBank acquisition: “This is a tremendous endorsement of our team and their ability to build truly transformative AI technologies at scale, as well as a great outcome for our company. Demand for AI compute is vast and continues to grow. There remains much to do to improve efficiency and computational power to unlock the full potential of AI. We have a partner that can enable the Graphcore team to redefine the landscape for AI technology.”
Integrating Graphcore with the incoming Arm Holdings infrastructure has SoftBank looking to develop comprehensive AI solutions that combine Arm's energy-efficient designs with Graphcore's specialized AI accelerators
potentially enhancing performance in AI and machine learning applications.
SoftBank's strategy includes creating a network of AI data centers powered by Arm and Graphcore technologies
looking to provide alternatives to existing solutions and reduce reliance on dominant players
while capitalizing on the energy efficiency and improved sustainability metrics of data centers built on these technologies
The acquisition of Ampere and their design expertise fits well with this projected model
with the potential for some of the technologies being pioneered by Ampere to be integrated with the core Arm vision
while Ampere remains its own entity within the SoftBank ecosystem
The Ampere acquisition doesn’t stand alone
It is the latest and perhaps most strategic move in a broader chess game SoftBank is playing across the AI and data infrastructure landscape
The deal must be seen in the context of SoftBank’s recent alignment with two other heavyweight players: OpenAI and Oracle.
OpenAI unveiled plans for its Stargate project—a massive
multi-billion-dollar supercomputing campus set to come online by 2028
Stargate is expected to be one of the largest AI infrastructure builds in history
and Oracle will be the primary cloud provider for the project
SoftBank is playing a key financial and strategic role
helping OpenAI secure capital and compute resources for the long-term training and deployment of advanced AI models
Oracle, meanwhile, is both an investor in Ampere and a major customer, one of the first hyperscale operators to go all-in on Ampere’s Arm-based CPUs for powering cloud services.
With SoftBank now controlling Ampere outright
it gains a stronger seat at the table with both Oracle and OpenAI—positioning itself as an essential enabler of the AI supply chain from silicon to software
The Ampere deal gives SoftBank direct access to a custom silicon pipeline purpose-built for the kind of high-efficiency
high-throughput compute that AI inference and model serving demand at scale
Combine this with SoftBank’s ownership of Arm
the bedrock of energy-efficient chip design
and its portfolio now spans everything from the instruction set to the cloud instance
In a world where Nvidia dominates AI training workloads, there’s growing appetite for alternatives in inference. Ampere’s silicon, combined with Graphcore’s AI acceleration tech and Arm’s global design ecosystem, could form an efficient, sustainable AI infrastructure stack, not just for SoftBank’s own ambitions but for partners like OpenAI and Oracle who are building the next layer of the internet. SoftBank isn’t just assembling parts. It’s building a vertically integrated platform for AI infrastructure, one that spans the stack from chip design up. The Ampere acquisition solidifies that platform and may prove to be the most pivotal piece yet.
As an investor, SoftBank is known for going big
That mentality came across clearly Wednesday evening, when the storied tech dealmaker announced it will acquire chip design company Ampere Computing
Ampere will operate as a wholly owned subsidiary of SoftBank Group
Of course, SoftBank knows quite a bit about the Arm ecosystem, having acquired British chipmaker Arm Holdings in 2016 for $32 billion
with Arm going public in 2023 and currently sporting a market cap around $124 billion
For SoftBank, the Ampere deal fits into a broader investment strategy around AI infrastructure. In the deal announcement, it cited other recent investments around this theme, including a partnership with OpenAI to develop advanced enterprise AI
Most prominently, SoftBank is a lead partner and backer for The Stargate Project, a multicompany initiative which is looking to spend $500 billion over the next four years to build out AI datacenters and infrastructure. SoftBank chief Masayoshi Son will serve as chairman of the project
SoftBank has also been stepping up activity for its Vision Fund. The fund regularly ranked among the most-active and highest-spending startup backers a few years ago, around the market peak. However, it scaled back sharply in later quarters, as many of its largest unicorn bets fared poorly
The fund also made a strategic investment of an undisclosed amount in cloud security startup Wiz in November. Should Google consummate its planned $32 billion purchase of the company
that ought to turn into a very profitable wager
Reporting by Anton Bridge in Tokyo and Harshita Meenaktshi in Bengaluru; Editing by Alan Barona and Jamie Freed
Company brings energy-efficient processors and partner ecosystem for next-generation telecom networks
Ampere® Altra® Family of processors uniquely suited to address growing market
Ampere announced today that it is accelerating its effort to address the telecom market, bringing its energy-efficient processors to next-generation RAN networks. The company is seizing a major market opportunity by enabling telecom providers to meet growing performance demands while lowering costs and energy use.
According to Grandview Research, the global telecom services market is estimated to reach $1.9 trillion in 2024 and is projected to grow at a CAGR of 6.5% from 2025 to 2030
"Rising spending on the deployment of 5G infrastructures due to the shift in customer inclination toward next-generation technologies and smartphone devices is one of the key factors driving this industry," Grandview said
Ampere processors provide numerous advantages for the telecom market due to their high core density, which supports the industry’s growing need for high performance and low power.
Low power consumption and efficient thermal design make Ampere-based platforms ideal for a variety of edge form factors found in telecom network infrastructure
a key benefit for telcos managing large-scale deployments across many regions with unpredictable and rising energy costs
Ampere is strengthening its presence in the telecom sector to capitalize on this significant market opportunity
The company views this market as a natural extension of the Cloud Native workloads it already supports today
delivering Ampere's benefits to the edge and across distributed networks
Ampere is announcing new trials with global telecom customers as it expands its reach in this market
Ampere has also broadened the ecosystem of providers and suppliers using or supporting Ampere products
the company is announcing multiple advancements through collaboration with a variety of foundational partners in all aspects of the ORAN solution stacks
The collective set of partners is now ready for production deployments throughout the ORAN market within calendar year 2025.
With an expanding ecosystem of partners now delivering Ampere-based products and services
Ampere is capitalizing on the immense opportunities in the telecom market
As telecom operators build out Cloud Native networks and AI-driven applications, power-efficient solutions offer a strategic advantage.
By enabling telecom operators to optimize performance and lower energy and operating costs
Ampere is well-positioned to lead the industry's transition to more efficient and scalable telecom infrastructure
The move indicates a potential shift in Arm’s business model
Arm IP does not hold the same dominance in the data center space as it does in other sectors like mobile and automotive
Expanding into these markets as a processor designer could potentially place the company in direct competition with its main customers
There is no such problem in data centers because there is very little Arm-based CPU presence, but there is a growing trend of Arm-based CPUs in data centers, sold by Ampere and Nvidia and used internally by Amazon.
This would be a way to put Arm-based servers in competition with x86 servers from Intel and AMD
in a context where Intel is facing the worst crisis in its history
The energy efficiency of Arm-based designs is particularly relevant in a context where energy consumption is a determining factor in data centers, which can use thousands of these CPUs, and therefore has an immediate impact on costs.
Japanese investment firm SoftBank aims to build a data center giant
SoftBank acquired UK-based AI ASIC maker Graphcore
as training large AI models requires servers combining CPUs with AI accelerators
Bringing Ampere into the fold with Graphcore would potentially give SoftBank two main pillars of AI in data centers
allowing it to capture a bigger share of the growing value in the data center processor business
Ampere has a unique position in the market
There is no other company with a similar value proposition
It was one of the first companies to bring Arm-based CPUs to the market
What’s interesting is to see the transformation from the first generation of CPUs in 2020 using TSMC 7 nanometers – which was already advanced – to using TSMC 5 nanometers currently – which is even better – and it is pushing towards TSMC 3 nanometers in 2025
Ampere has a solid market position at a time when there is momentum building around the Arm-based CPU
Keep following Yole Group’s analyses in 2025 to see the impact of Arm’s strategy on the market
Hugo Antoine is a Technology &amp; Market Analyst, Computing and Software, at Yole Group. Hugo develops technology and market analyses covering computing hardware and Artificial Intelligence (AI). He holds a master’s degree from Ecole des Mines de Saint-Etienne (France), with a focus on microelectronics and computing at the Centre of Microelectronics in Provence (France). He pursued an AI specialization at Ecole Polytechnique de Montreal (Canada) and completed a dual-degree program in innovation management at emlyon business school, highlighting his expertise at the intersection of technology and business.
Adrien Sanchez is Senior Technology &amp; Market Analyst, Computing and Software, at Yole Group. Adrien produces technology and market analyses covering computing hardware and software, machine learning and neural networks. Prior to Yole Group, he worked on image recognition and comprehension for ADAS, and at ACOEM (France) on real-time sound classification using deep learning and edge computing. Adrien graduated with a double degree from Grenoble Institute of Technology PHELMA (Grenoble INP Phelma, France) and Grenoble Ecole de Management (GEM, France), and he earned an MSc in AI at Heriot-Watt University (Edinburgh, UK). See also Yole Group’s Processor Market Monitor.
Source: yolegroup.com
Carlyle exits Ampere in SoftBank’s $6.5bn AI chip acquisition
Ampere, founded by former Intel executive Renee James, specializes in processors for data centers, using technology from chip designer Arm Holdings.
The company was previously valued at over $8bn in 2021 when SoftBank considered a minority investment
This acquisition provides SoftBank with access to one of the few remaining independent design teams for high-performance data center chips
an increasingly competitive sector driven by the AI boom
The deal ensures Ampere remains a key player in developing energy-efficient processors for large-scale data centers
a critical need as AI computing demands accelerate
SoftBank CEO Masayoshi Son emphasized that the future of AI requires breakthrough computing power
and Ampere’s expertise in semiconductors and high-performance computing aligns with this vision
Ampere will continue operating as a wholly owned subsidiary of SoftBank while maintaining its headquarters in Santa Clara
For Carlyle, the sale of Ampere follows its broader trend of exiting technology investments as market conditions evolve
The deal underscores the growing influence of private equity in semiconductor transactions
particularly as firms position themselves amid surging demand for AI infrastructure
Source: The Edge
Reporting by Kanjyik Ghosh and Anirban Sen; Editing by Subhranshu Sahu
The electrochemical reduction of carbon dioxide (CO2) to carbon monoxide (CO) is challenged by a selectivity decline at high current densities. Here we report a class of indigo-based molecular promoters with redox-active CO2 binding sites to enhance the high-rate conversion of CO2 to CO on silver (Ag) catalysts. Theoretical calculations and in situ spectroscopy analyses demonstrate that the synergistic effect at the interface of indigo-derived compounds and Ag nanoparticles could activate CO2 molecules and accelerate the formation of key intermediates (*CO2– and *COOH) in the CO pathway. Indigo derivatives with electron-withdrawing groups further reduce the overpotential for CO production by optimizing the interfacial CO2 binding affinity. By integrating the molecular design of redox-active centres with the defect engineering of Ag structures, we achieve a Faradaic efficiency for CO exceeding 90% across a current density range of 0.10–1.20 A cm–2. The Ag mass activity toward CO increases to 174 A mgAg–1. This work showcases that employing redox-active CO2 sorbents as surface-modification agents is a highly effective strategy to intensify the reactivity of electrochemical CO2 reduction.
a Reaction mechanism of redox-active Id for CO2 capture. The experiments were conducted by dissolving 2.5 mM Id in dimethyl sulfoxide (DMSO) with 0.1 M tetrabutylammonium hexafluorophosphate (TBAPF6) at a scan rate of −20 mV s–1. Ferrocene/ferrocenium (Fc0/+) was used as an internal reference. c DFT-optimized structure of the Id-2CO22– adduct showing the bent CO2 configuration at the redox-active oxygen center. The corresponding bond angles and lengths are listed next to the structure. d, e Comparison of CO FE as a function of jtotal (d) and jCO as a function of potential (e) for AgNP and AgNP+Id. The flow cell was operated with 1 M KOH (pH = 14.0 ± 0.2); values are from measurements of three independent electrodes. f Adsorption configuration of the *CO2– intermediate at the Ag/Id interface and comparison of *CO2– adsorption energies for AgNP and AgNP+Id. g Potential-dependent ATR-SEIRAS contour map for AgNP and AgNP+Id, where R and R0 are spectra collected at the sample potential and the open circuit potential, respectively. Source data are provided as a Source Data file.
Through a combination of precise molecular design, we systematically investigate a series of indigo derivatives with varying CO2 affinities to unravel the interplay between electro-activated CO2 sorbents and the catalytic performance at the Ag/organic interface. By further immobilizing CO2-binding moieties into a macromolecular structure and interfacing them with defect-rich Ag particles on a carbon support, we obtain an optimized hybrid catalyst that markedly improves CO2RR reactivity and Ag mass activity toward CO. Steady CO FEs of over 90% can be reached at ampere-level current densities up to nearly 1.2 A cm–2, accompanied by a notable Ag mass activity of 174 A mgAg–1 toward CO production. We hypothesize that the introduction of CO2-binding Id to Ag catalysts could effectively activate CO2 by weakening its C=O bond strength during CO2RR, thereby facilitating the subsequent reductive transformation.
These results suggest that Id modification by physical mixing has minimal impact on the morphology or electronic structure of AgNP, excluding surface roughness as the contributor to the improved CO2RR performance. We observe intensified ν(*COOD) bands emerging at lower overpotentials for AgNP+Id, implying accelerated kinetics of *COOD formation. The in situ ATR-SEIRAS results support that the redox-active Id molecule can significantly facilitate CO2 activation to form key intermediates at the Ag/Id interface, ultimately leading to promoted CO generation.
including 5,5′,6,6′-tetramethoxylindigo (TMId), to investigate their impact on Ag-catalyzed CO2RR.
a Linear relationship between the onset potentials for CO2 capture in aprotic and aqueous electrolytes for indigos functionalized with EDGs or EWGs. b, c Comparison of CO FE at different jtotal (b) and jCO at different potentials (c) for AgNP modified with various indigos. d Correlation between the CO2RR potential at a jtotal of ~30 mA cm–2 for modified AgNP catalysts and the onset potential for CO2 capture by various indigos. The best CO2RR performance is achieved with DCId, whose CO2 affinity is the weakest among the indigo derivatives. AgNP+DCId attains a jCO of 273 mA cm–2 at –0.54 V, which is ~2.6 times higher than that of AgNP+TMId (the derivative with the strongest CO2 affinity) at a similar potential. AgNP+DCId also exhibits the most favorable *CO2– adsorption energy toward CO formation in our case. These results rule out other possible factors as main contributors to the enhanced CO2RR performance of AgNP modified with indigo derivatives.
HBTU: 2-(1H-benzotriazol-1-yl)−1,1,3,3-tetramethyluronium hexafluorophosphate; DIPEA: N,N-diisopropylethylamine; DMF: N,N-dimethylformamide
The peaks are assigned to the corresponding carbons labeled in (a)
and jCO as a function of potential (e) for AgNP
We prepared a carbon-supported Ag catalyst with abundant surface defects (D-Ag/C) via in situ electrodeposition (see Methods for synthesis details). a HAADF-STEM and STEM-EDS mapping of the D-Ag/C catalyst showing isolated Ag particles on the carbon support. b HRTEM image of the D-Ag/C catalyst showing abundant planar defects (denoted by arrows). c, d XANES data (c) and Fourier transform magnitudes of the k2-weighted EXAFS spectra (d) at the Ag K-edge of AgNP and D-Ag/C. The spectra of Ag foil are shown as a reference. e The Ag−Ag coordination numbers obtained by theoretical fits to the EXAFS data.
a FE and cell voltage at different jtotal. b jCO and EE for CO at different cell voltages without iR compensation. c CO2RR stability at a jtotal of 400 mA cm–2 when operating the MEA cell at 50 °C using 0.2 M CsOH (pH = 13.2 ± 0.1) as the anolyte.
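Panel b plots EE (energy efficiency) for CO against cell voltage. A minimal numeric sketch of the standard full-cell definition is below; the paper's exact convention is not given in this excerpt, and the ~1.34 V thermodynamic voltage for CO2-to-CO paired with oxygen evolution is an assumption, as are the example numbers:

```python
def co_energy_efficiency(fe_co, cell_voltage, e_thermo=1.34):
    """Full-cell energy efficiency (%) for CO2-to-CO conversion.

    fe_co: Faradaic efficiency for CO as a fraction (0-1)
    cell_voltage: measured cell voltage in V (without iR compensation)
    e_thermo: assumed thermodynamic voltage for CO2 -> CO + 1/2 O2, ~1.34 V
    """
    return e_thermo * fe_co / cell_voltage * 100


# Illustrative: 90% CO FE at a 3.0 V cell voltage
print(round(co_energy_efficiency(0.90, 3.0), 1))  # -> 40.2
```

As the sketch makes plain, EE falls with rising cell voltage even at constant FE, which is why panel b reports both quantities against voltage.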
Using redox-active indigo molecules as a model system, we demonstrate that decorating Ag-based catalysts with electro-activated CO2-binding organics creates synergistic interfaces that significantly enhance both the selectivity and activity of CO2-to-CO conversion. The dynamic complexation interactions between these organic promoters and CO2 readily activate CO2 molecules and effectively enrich the *CO2– and *COOH intermediates at the nearby Ag catalytic sites. By precisely tuning the CO2 binding affinities of indigo derivatives via molecular engineering, we unravel a critical volcano-like relationship between the *CO2– adsorption energy induced by organic modifiers and their promotional effect on CO2RR. This mechanistic insight culminates in the development of a hybrid catalyst that couples polymerized indigo moieties with the optimal CO2 affinity and highly dispersed, defect-rich Ag particles, achieving impressive CO2RR performance at ampere-level current densities. We anticipate that this redox-active molecular platform can be extended to benefit CO2 electrolysis to high-value multi-carbon products when incorporated into copper-based catalysts. Our work opens new avenues for the rational design of highly efficient CO2RR catalysts. Moreover, the integration of CO2-binding species with CO2RR catalysts could potentially enable a desirable reactive carbon capture scheme where CO2 from dilute sources is directly converted into chemicals and fuels without prior concentration, offering advantages in process intensification and energy efficiency.
99.9%) were obtained from Thermo Scientific
Indigo powder (97.0%) was purchased from TCI America
All chemicals were used as received without further purification.
Synthesis procedures for indigo derivatives are provided in Supplementary Note 1
P-Id was prepared via amidation polymerization
2.16 mmol) in anhydrous DMF (30 ml) was added DIPEA (2.09 ml
The mixture was stirred for 30 min before DODA (2 mmol
The reaction was stirred for another 24 hours at room temperature (20 °C) and poured into a saturated NaHCO3 aqueous solution (200 ml)
The precipitate was collected by filtration, washed sequentially with saturated NaHCO3 (aq), and dried in vacuo to give a dark blue solid (784 mg
The D-Ag/C catalyst was synthesized using the in situ electrodeposition method
Vulcan XC-72 carbon black suspension (10 mg ml–1) was added to AgNO3 aqueous solution (0.33 mg ml–1) and sonicated in an ice bath for 2 h to allow the uniform dispersion of Ag+ on the carbon black support
The resulting slurry was added to isopropanol with 5 wt% Nafion solution to obtain the catalyst ink
which was then spray-coated on GDL (Sigracet 39BB) until reaching a total mass loading of 0.3 mg cm–2
The D-Ag/C catalyst was formed in situ by reducing the as-prepared GDE at 30 mA cm–2 for 15 min in the flow cell supplied with CO2 gas and 1 M KOH
The final Ag loading on the GDE was ~ 6 μg cm–2 measured by ICP-OES
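The mass activity quoted earlier (174 A mgAg–1) follows from dividing the CO partial current density by this Ag mass loading. A quick check is sketched below; the jCO value of 1.04 A cm–2 is an illustrative number consistent with the ampere-level densities reported, not a figure stated in this excerpt:

```python
def ag_mass_activity(j_co_a_cm2, ag_loading_mg_cm2):
    """Ag mass activity toward CO (A per mg of Ag): the CO partial current
    density divided by the metal mass loading on the electrode."""
    return j_co_a_cm2 / ag_loading_mg_cm2


# ~6 ug cm^-2 = 0.006 mg cm^-2 Ag loading (from ICP-OES); illustrative j_CO
print(round(ag_mass_activity(1.04, 0.006)))  # -> 173
```

The result (~173 A mg–1) lines up with the reported 174 A mgAg–1 given the rounding of the ~6 μg cm–2 loading, and it illustrates why an ultralow metal loading is the key lever for high mass activity.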
The theoretical EXAFS signal was fitted to the experimental EXAFS data in R-space by Fourier transforming both the theoretical and experimental data
CV measurements were carried out on a BioLogic VSP potentiostat (BioLogic Science Instruments)
Glassy carbon (3 mm in diameter) was used as the working electrode
and Pt wire was used as the counter electrode
For measurements conducted in aprotic electrolytes, Ag wire was used as a pseudo-reference electrode, and ferrocene was used as an internal reference.
The organic compound (2.5 mM) was dissolved in DMSO with 100 mM TBAPF6 as the supporting salt
For measurements conducted in aqueous electrolytes, Ag/AgCl (3 M KCl) was used as the reference electrode.
The reference electrode was calibrated using a standard hydrogen electrode before measurements
The compound was mixed with Vulcan XC-72 (mass ratio of 1:1) in isopropanol and drop-casted onto the glassy carbon electrode
The pH of the electrolyte (1 M KHCO3 saturated with either N2 or CO2) was measured by a pH meter (SevenCompact)
The potential was converted to the RHE scale using ERHE = EAg/AgCl + 0.209 V + 0.0591 × pH
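The conversion above can be wrapped in a small helper; this is a sketch that simply encodes the stated relation (0.209 V offset for Ag/AgCl in 3 M KCl plus the 0.0591 V per pH unit Nernstian term), with an illustrative example potential:

```python
def to_rhe(e_ag_agcl, ph):
    """Convert a potential measured vs. Ag/AgCl (3 M KCl) to the RHE scale.

    Encodes E_RHE = E_Ag/AgCl + 0.209 V + 0.0591 V * pH, as given in the text.
    """
    return e_ag_agcl + 0.209 + 0.0591 * ph


# Example: -1.0 V vs. Ag/AgCl in a pH 7.2 CO2-saturated bicarbonate electrolyte
print(round(to_rhe(-1.0, 7.2), 3))  # -> -0.365
```

Using the pH-dependent RHE scale lets potentials measured in electrolytes of different pH (N2- vs. CO2-saturated KHCO3) be compared directly.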
CV curves were collected at scan rates of –20 and –10 mV s–1 for the aprotic and aqueous conditions, respectively.
and cell setup were initially activated at 30 mA cm–2 for 2 h before starting the performance measurements
The cell voltage was recorded without the iR correction
A gas chromatograph (Shimadzu GC-2014) equipped with a thermal conductivity detector was employed to monitor the gas products.
The FEs of gas products were calculated as follows:

FE (%) = (z × F × x × V) / (jtotal × A) × 100

where z is the number of electrons transferred to form a target product; F is the Faraday constant; x is the molar fraction of a target product determined by GC; V is the molar flow rate of effluent gas measured using a digital flow meter (Omega); jtotal is the total current density; and A is the geometric electrode area.
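The Faradaic-efficiency calculation described above can be evaluated numerically as a sanity check. This is a minimal sketch, not the paper's code: the electrode area appears only to convert current density to total current, and all example numbers are illustrative:

```python
F = 96485.33  # Faraday constant, C mol^-1


def faradaic_efficiency(z, x, v_mol_s, j_total_a_cm2, area_cm2):
    """Faradaic efficiency (%) of a gas product.

    z: electrons transferred per product molecule (2 for CO2 -> CO)
    x: molar fraction of the product in the effluent gas (from GC)
    v_mol_s: molar flow rate of the effluent gas, mol s^-1
    j_total_a_cm2: total current density, A cm^-2
    area_cm2: geometric electrode area, cm^2 (illustrative value)
    """
    i_total = j_total_a_cm2 * area_cm2  # total cell current, A
    return z * F * x * v_mol_s / i_total * 100


# Illustrative: 45% CO in a 2e-5 mol/s effluent at 0.386 A cm^-2 on 5 cm^2
print(round(faradaic_efficiency(2, 0.45, 2e-5, 0.386, 5.0), 1))  # -> 90.0
```

Intuitively, the numerator is the partial current carried by the product (z·F per mole times the product molar flow x·V), so FE is just that partial current divided by the total current.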
The catalyst ink was drop-casted onto a silicon ATR wafer with a thermal-evaporated Ag film (30 nm)
The catalyst loading was controlled to be 0.4 mg cm–2
A graphite rod and Ag/AgCl (3 M KCl) were used as the counter and reference electrodes, respectively. The SEIRAS spectra were recorded using a Thermo Fisher Nicolet iS50 spectrometer equipped with an N2-cooled HgCdTe (MCT) detector and a Veemax III IR attachment from PIKE.
The spectrometer was operated at a scan rate of 30 kHz
Spectra were acquired with a spectral resolution of 4 cm–1
and 16 interferograms were coadded for each spectrum
The electrolyte (0.1 M potassium phosphate in D2O) was continuously sparged with CO2.
CO2 electrolysis was carried out at potentials ranging from –0.1 to –1.0 V vs
The spectrum collected at open circuit potential was used as the reference
All the data that support the findings of this study are available in the main text and the Supplementary Information. Data are also available from the corresponding author upon request. Source data are provided in this paper
We acknowledge financial support from the Johns Hopkins University
and the National Science Foundation (NSF grant number 2237096)
This work was partially performed at the Materials Characterization and Processing Center in the Whiting School of Engineering at Johns Hopkins University
are grateful for the support of the Ralph S. O'Connor Sustainable Energy Institute (ROSEI)
and Yuanyue Liu acknowledge the support by Welch Foundation (F-1959)
and the computational resources provided by ACCESS and NREL
acknowledges support from IIT Kanpur (Project number 2024098) and the Chandrakanta Kesavan Center for Energy Policy and Climate Solutions
would like to acknowledge CONAHCyT for the doctoral scholarship provided under the program (CVU1051087)
acknowledge support by the NSF grant CHE 2102299
The work carried out at Brookhaven National Laboratory was supported by the DOE under contract DE-SC0012704
XAS measurements used resource 7-BM of the National Synchrotron Light Source II
a DOE Office of Science User Facility operated for the DOE Office of Science by Brookhaven National Laboratory under contract DE-SC0012704
The 7-BM beamline operations were supported in part by the Synchrotron Catalysis Consortium (DOE Office of Basic Energy Sciences grant DE-SC0012335)
These authors contributed equally: Zhengyuan Li
Department of Chemical and Biomolecular Engineering
Texas Materials Institute and Department of Mechanical Engineering
Department of Materials Science and NanoEngineering
Department of Materials Science and Chemical Engineering
Department of Materials Science and Engineering
Department of Sustainable Energy Engineering
conceptualized the project under the supervision of Yayuan Liu; Z.L
synthesized and characterized organic compounds
performed DFT calculations under the supervision of Yuanyue Liu; C.S.G and Z.L
conducted in situ ATR-SEIRAS measurements under the supervision of V.S.T.; S.R.
All authors discussed the results and commented on the paper
The authors declare no conflict of interest
and the other anonymous reviewer(s) for their contribution to the peer review of this work
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations
DOI: https://doi.org/10.1038/s41467-025-58593-w
there are dark clouds on the horizon for Ampere's long history in the clouds
and Google is putting increased pressure on Ampere
The article shows a total market-share decrease for Ampere after years of increases, even with representation across all the major cloud providers except AWS.
Ampere dropped from over 20% of the Arm market in the cloud to 18.2% after the recent introduction of Azure Cobalt and Google Axion
AWS's Arm-based Graviton instances also continue to expand
market intelligence analysts can see an average of 30% savings by using Ampere in lieu of AMD or Intel semiconductors
though they can find more detailed information
and Google with their own Arm-based instances is putting increased pressure on Ampere
"Azure currently offers better performance and is 8.8% cheaper than Ampere."
The information in this Ampere article provides a solid beginning for market intelligence analysts forecasting or considering the pros and cons of choosing instances by Ampere Computing for their enterprise.
Liftr has been providing data to customers ranging from Financial Services firms making investment decisions to semiconductor vendors
About Liftr Insights Liftr Insights generates reliable market intelligence using unique data
Liftr Insights cloud and semiconductor data cover the globe and represent census data for over 75% of the public cloud
Liftr Insights subject matter experts translate company-specific service provider data into actionable alternative data
a pioneer in market intelligence driven by unique data
is reportedly getting close to acquiring chipmaker Ampere Computing LLC in a deal that would value the company at about $6.5 billion
Bloomberg, referencing people familiar with the matter, reported today that talks between the companies are at an advanced stage and that a deal may be announced in the coming weeks
The same source did note that though the talks are advanced
Reports that Ampere was interested in being acquired first emerged in September. Fast-forward to Jan. 9, and it was reported that SoftBank and Arm Holdings plc, which is majority-owned by SoftBank, were interested and were exploring a potential deal.
Arm and Ampere have not commented on the latest reports
Founded in 2017 by former Intel President Renee James
energy-efficient processors for cloud computing and artificial intelligence workloads
Ampere’s products include the Ampere Altra and AmpereOne processor families
The Ampere Altra series is ideal for applications such as edge computing and large-scale cloud deployments
whereas the AmpereOne family delivers enhanced performance for demanding cloud-native and AI tasks
it would significantly boost SoftBank’s position in the semiconductor industry
particularly in the data center and AI markets
SoftBank would be able to offer integrated solutions that combine Arm’s architecture with Ampere’s advanced chip designs
giving the combined companies a potential edge against industry leaders such as Intel Corp
Notably, SoftBank also owns Graphcore Ltd.
SoftBank might encourage Ampere to launch a go-to-market partnership with Graphcore
Ampere’s tech combined with Graphcore could also allow SoftBank to enter the server market with an offering that combines Graphcore’s Bow IPUs and Ampere CPUs
discover how this pure player in intelligent and electric mobility is reshaping the industry and defining a promising future for net-zero mobility
In November 2023, Renault Group reached a major milestone with the creation of Ampere
an entity exclusively dedicated to intelligent electric vehicles (iEVs)
This accolade affirmed Ampere’s ability to produce attractive and competitive vehicles for the European market
The Geneva Motor Show also marked the launch of Renault 5 E-Tech Electric
With cutting-edge technologies and an electric powertrain built on the new AmpR Small platform
the reimagining of Renault 5 generated unprecedented enthusiasm
designed to popularize electric cars in the European market
underscores Renault Group's transformation into a next-gen automotive company:
« To develop this car in just three years in France
With Twingo E-Tech Electric slated for market release in 2026
Ampere continues to democratize electric vehicles in Europe
this city car will be priced below €20,000
In just one year, Ampere has proven its ability to innovate in the design and manufacturing of electric vehicles under the Renault brand. Other brands, including Nissan, Alpine, and Mitsubishi, have also sought Ampere’s expertise for developing new electric vehicles: Alpine for the A290 and A390
Nissan for its Compact EV and potentially a new A-segment electric vehicle
and Mitsubishi for its upcoming electric C-SUV
These collaborations highlight Ampere’s role as a technological platform
capable of meeting the challenges of electric mobility efficiently and swiftly
To further optimize its development processes
Ampere now relies on the Advanced China Development Center (ACDC)
This entity collaborates closely with Chinese partners
leveraging their ecosystem to enhance Ampere’s global competitiveness
As it celebrated its first anniversary, Ampere unveiled its Renault Emblème demo car, embodying a vision for a fully decarbonized family vehicle "from cradle to grave." The vehicle reduces its overall lifecycle greenhouse gas emissions (CO2e) by 90% compared to a similar model produced today. Emblème reflects Ampere’s commitment to eco-design, incorporating recycled materials and advanced technologies to deliver a vehicle that is comfortable, high-performing, and environmentally friendly.
Over the past twelve months, Ampere has accelerated vehicle development and production timelines, demonstrated its technological platform expertise through collaborations, announced the integration of LFP and Cell-to-Pack technologies, prepared the launch of its first Software-Defined Vehicle, and revealed a demo car showcasing its ambition to achieve net-zero carbon by 2035.
The deal is part of the investment giant’s push into AI infrastructure and follows its recent multi-billion-dollar investment in OpenAI
The purchase of Ampere is part of Softbank's push further into chip design. The investment firm called out Ampere's expertise in the development of lightweight and fast Arm-based processors
The processors were initially built by chip designer Arm Holdings
"The future of Artificial Super Intelligence requires breakthrough computing power,” Masayoshi Son, chairman and CEO of SoftBank, said in a statement
Current majority shareholders Carlyle and Oracle will sell their stakes in Ampere.
Softbank has been leveraging external partnerships to forge ahead on AI
with an emphasis on OpenAI and its wildly popular ChatGPT
In February, SoftBank and OpenAI announced plans to collaborate on a new enterprise AI system called “Cristal intelligence,” which will integrate individual data enterprises onto one platform
SoftBank also agreed to invest $3 billion to deploy OpenAI's software across its group of companies
The partnership came weeks after SoftBank announced plans to participate in a $500 billion investment project in OpenAI's U.S. infrastructure over the next four years
The company is serving as an equity partner in the project alongside OpenAI
Ampere was founded in 2017 and now operates across nine locations in North America, Europe and Asia, according to its website
The company has been losing money in recent years, posting a $510 million operating loss in 2024, making it more vulnerable to a takeover.
Ampere, the Oracle-backed semiconductor designer
Bloomberg News reported at the time that Ampere
which designs semiconductors using Arm’s technology
was valued at $8 billion in a proposed minority investment by Japan’s SoftBank in 2021
Representatives of Arm and Ampere declined to comment
Spokesmen for SoftBank and Oracle did not immediately respond to requests for comment
Ampere is working with a financial advisor to explore acquisition interest, Bloomberg reported in September. Its openness to a deal with a larger player in the industry suggests it did not see an easy path to an IPO.
A deal for Ampere would add to a wave of chip companies looking to capitalize on the AI spending boom
Oracle said last year that it owns 29% of the startup and may exercise future investment options to give it control of the chipmaker
Although Ampere could benefit from the ongoing AI hype, several large technology companies are rushing to develop the same types of chips that Ampere makes.
While there is much interest in controlling key components as the data center industry refocuses on the AI era, Ampere, like larger competitors Intel and Advanced Micro Devices, must respond to a shift in spending from central processing units (CPUs) to Nvidia Corp.'s accelerator chips.
Ampere makes processors for data center equipment using Arm’s technology
Arm is increasingly moving from a licensor of fundamental standards and basic designs to a more complete chipmaker
Ampere's engineers, many of whom worked for Intel's formerly market-leading server chip division, could add expertise and momentum to CEO Rene Haas' push into that market.
the company said it had confidentially filed for an initial public offering (IPO) in the US
at a time when demand for chips was rising rapidly
A sale of Ampere would continue SoftBank's series of semiconductor deals.
Global deals with chip companies more than doubled to more than $31 billion last year
SoftBank first expressed interest in the chipmaker back in 2021
Arm and its owners SoftBank are reportedly looking to acquire Ampere Computing
According to a report from Bloomberg
“strategic options” are being explored with regards to a proposed takeover
SoftBank first expressed an interest in acquiring Ampere in 2021. However, after the chipmaker filed for an IPO the following year and documents revealed that Oracle had invested in the company to the tune of $426 million, SoftBank did not move forward with the purchase.
Oracle has since invested $600 million in convertible debt in Ampere during the fiscal year ending on May 31, 2024, and $400m in the 2023 fiscal year. In September 2024, Oracle announced it owned 29 percent of Ampere
with its debt financing set to mature in January 2026
That same September, it was reported that Ampere was working with a financial advisor to explore a potential sale
Founded in 2017 by CEO Renée James and a group of her former Intel colleagues
Ampere designs chips specifically for servers based on the Arm architecture
which has grown in popularity among data center operators in recent years
Initially based on Arm’s Neoverse blueprints
and SoftBank all declined Bloomberg’s requests for comment
Ampere was considering an IPO back in April 2022 but eventually shelved the plan
There have also been rumours that Arm might acquire Ampere or that Intel Corp
Now SoftBank is in talks valuing Ampere at about US$6.5 billion including debt
according to unnamed sources cited by Bloomberg
A deal could be announced in a matter of weeks
Oracle owns 29 percent of Ampere with an option to increase its holding to take control
Neither Ampere nor Arm commented on the possibility of a deal.
As the demand for computing power continues to skyrocket
so does the challenge of balancing performance with power efficiency
On the latest episode of Arm Viewpoints
host Brian Fuller sits down with Jeff Wittich
Chief Product Officer at Ampere Computing
to explore how Ampere is tackling some of the most pressing issues in modern computing
Ampere Computing has disrupted the market with its innovative Arm-based processors
designed specifically for cloud and edge environments
Wittich shares how the company’s mission—delivering high-performance
power-efficient compute solutions—has positioned it as a trailblazer in an industry dominated by x86 architectures
This episode of Arm Viewpoints offers a fascinating look into the future of computing
highlighting how Ampere is bridging the gap between innovation and sustainability
Catch the full episode to learn how Ampere Computing is shaping the future of compute environments—from data centers to the edge
Call to Action: Listen to the full podcast on Arm Viewpoints and discover how Ampere is powering the next wave of technological innovation!
Before joining Ampere, Jeff held senior leadership roles at Intel Corporation, where he was instrumental in developing five generations of Xeon processors and growing Intel’s Cloud Platform Business revenue sixfold. His expertise spans product development, market strategy, and aligning cutting-edge technologies with evolving customer needs.
Jeff’s relentless focus on innovation has helped redefine the semiconductor industry, ensuring processors meet the dynamic demands of hyperscale cloud computing and AI workloads. Today, he leads a world-class team at Ampere, pushing the boundaries of what’s possible in sustainable, high-performance computing.
[01:00] Ampere's product strategy, including innovations in chip architecture and AI acceleration. A real-world case study of Space Tech's implementation of Ampere processors for edge AI applications, and much, much more. So now we bring you Jeff Wittich. So, Jeff, welcome. Thanks for taking the time. Yeah, glad to be here.
Thank you very much. Catch us up, if you will, on Ampere, which burst onto the scene, and I don't think that's too hyperbolic, in 2018, right? You and Renee came out of Intel and said we're going to attack this particular market, but not with an x86 device. We're going to attack it with the Arm architecture.
Take us back there briefly and then bring us up to speed where you guys are today.
Jeff: All right. Perfect. Yeah, you're right. You know, especially on a [02:00] CPU time scale, you know, six years is nothing. So, you're right. We did kind of burst onto the scene six years ago. We've done a lot since then. So, kind of to go back to, you know, 2018, 2019, what our real vision was: when we looked out across the data center landscape,
We saw a couple of key challenges that were emerging. One was power. It wasn’t as obvious six, seven years ago that this was going to be a massive constraint. You know, to put it in perspective, for about 15 years there, the total power consumption in the US never went up. We had many, many years of very consistent power consumption.
And so there was always spare power on the grid. And that was because we had created all kinds of efficient ways to reduce the amount of power, even while new usages were coming online. Well, a couple of things were starting to change. One, vehicles were electrifying. That’s a really good thing, right?
Takes things away from using fossil fuels in our vehicles, but it also does mean that there's an increased demand on the [03:00] grid. And the second thing is that there were factors that were making it clear that data center power consumption could start to increase, even though it hadn't increased for a long time.
The things that were happening there were, you know, workloads were starting to change. Workloads were becoming more and more compute intensive. Analytics workloads got more sophisticated. The AI workloads of the time, which were more computer vision and recommender model focused, even those were getting more sophisticated.
Obviously, since then, we've seen, you know, gen AI and LLMs really just ratchet that up by another order of magnitude. So, workloads were getting more compute intensive. We'd already kind of used up the easy power efficiency gains. Now data centers are already very, very power efficient. The most efficient data centers in the world maybe only have five or 10 percent waste, and that's a change from maybe 50 percent waste, 60 percent waste a decade ago. So great job making data centers more power efficient, but it also means there's only 10 percent [04:00] more to go and attack. There are almost no efficiencies left there.
So, we did a great job there. And then also, the x86 CPUs at the time, they weren't increasing in efficiency. They were increasing modestly in performance gen over gen, but oftentimes that performance gain came with the exact same gain in power. So, performance per watt just wasn't increasing. What that meant was that each generation, even though each CPU had more performance, performance at the rack level wasn't going up.
It just meant that every single year when you put the new generation of, I'll just say Intel CPUs in there, you had fewer CPUs in the rack, but the exact same performance that you had the year before, consuming the same amount of power. So, it wasn't helping anybody create a more dense or higher performance data center.
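That flat performance-per-watt dynamic can be put in rough numbers. A minimal sketch, with hypothetical figures (not Ampere or Intel data), of how a fixed rack power budget caps total rack performance when per-CPU performance and power rise in lockstep:

```python
# Hypothetical illustration: if each CPU generation gains performance and
# power in equal proportion (flat performance per watt), a fixed rack
# power budget yields no net rack-level performance gain.

RACK_POWER_BUDGET_W = 10_000  # assumed rack power envelope

def rack_performance(perf_per_cpu: float, watts_per_cpu: int) -> float:
    """Total performance achievable inside the fixed rack power budget."""
    cpus_that_fit = RACK_POWER_BUDGET_W // watts_per_cpu
    return cpus_that_fit * perf_per_cpu

# Gen 1: 250 W per CPU, 100 units of performance each -> 40 CPUs fit.
# Gen 2: +20% performance but also +20% power -> only 33 CPUs fit.
print(rack_performance(perf_per_cpu=100.0, watts_per_cpu=250))  # 4000.0
print(rack_performance(perf_per_cpu=120.0, watts_per_cpu=300))  # 3960.0
```

Per-CPU performance went up 20 percent, yet the rack delivers essentially the same total performance for the same power, which is the point Jeff is making.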
And so, we looked out across the landscape, and we said, there’s a clear opportunity here. The scale of compute continues to increase. The [05:00] cloud is obviously the model of computing that everybody’s migrated to. Whether it’s on prem or public cloud or hybrid, it doesn’t really matter. It’s a cloud-based model, which is big compute at scale.
And there were clear problems that were going to make it difficult to keep scaling the cloud if the industry didn't have other solutions. So that's where Ampere came about: we decided there had to be a better way to deliver high performance compute, and not to deliver high performance compute that was also very, very high power.
There was a way to go in and re-architect things from the very most base level and build something that was high performance, but also very low power and power efficient, and that's where we came in. And so, we started with a very different approach than the x86 vendors. You know, we started with Arm-based CPUs and really innovating all the way down to the architecture and microarchitectural level to build something that [06:00] was really well suited for the cloud environment of today.
Versus the legacy environments that those x86 CPUs were built for when they were first built 10, 20, 30 years ago. So that was, that was really the impetus. And we’ve done a lot in the meantime.
Brian: You mentioned a couple of the prevailing workloads back when the company was launched, around vision. Generative AI literally exploded on the scene two years ago. Has that had an impact on how you think about building solutions, or does your CPU-forward approach just naturally fit into the evolving demands of gen AI?
Jeff: Yeah, I think there's two elements of it. I think at a strategic level, it doesn't change a whole lot, because gen AI really just made those strategic imperatives more important.
The idea that you need more and more performance, but you can't [07:00] consume more power to do so, means you're going to run into some real constraints. So, it fits into the exact same space. And obviously building a general-purpose CPU means that you're building for a lot of workloads. Now, five, six years ago, the workloads that I cared the most about were web servers, databases, video encoding.
It was AI inference, but AI inference in terms of computer vision or natural language processing. It was recommender models, non-transformer-based AI models, but that was a big focus for us back then. What's changed is that the balance of workloads has changed a little bit. And so, a higher percentage of the workloads are now AI inference than before.
And the nature of those AI inference workloads has changed a bit, you know, with the transformer-based approach and the LLMs that came from it. What it means is the models are much larger than they used to be, and some of the compute elements that are best [08:00] utilized for them look a little bit different than what the compute elements looked like for the models of the past.
Now, the good thing is that's what a general-purpose CPU is really good at. It's really good at being really versatile. So as workloads change, it does a pretty good job at virtually any workload. But since we knew that AI inference had the possibility to be one of the predominant workloads in the future, we did do specific things to ensure that AI inference as a workload ran really well on our processor.
Things like ensuring that all the numerical formats that people care about are natively supported, things like bfloat16 or int8. We did a lot of work on just ensuring that the performance of the hardware itself could be exposed really easily to the end user. About three years ago, we acquired a company called OnSpecta that was building software acceleration libraries [09:00] for AI inference.
And the result of that was that it became very, very easy for people to run AI inference on our CPU and to harness the SIMD units and the other microarchitecture elements of our processor that are really good at AI inference, to really get the maximum possible performance out of those, without the end user needing to actually do anything to optimize their code or optimize their model or worry about the framework. All that stuff just happens under the hood.
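One reason formats like bfloat16 and int8 matter for inference is simple arithmetic on weight memory. A minimal sketch with an assumed parameter count (illustrative, not an Ampere benchmark):

```python
# Bytes per weight under the numeric formats mentioned above.
BYTES_PER_ELEMENT = {"float32": 4, "bfloat16": 2, "int8": 1}

def weight_memory_gb(n_params: int, fmt: str) -> float:
    """Gigabytes needed just to hold the model weights in a given format."""
    return n_params * BYTES_PER_ELEMENT[fmt] / 1e9

N_PARAMS = 7_000_000_000  # hypothetical 7B-parameter model

for fmt in BYTES_PER_ELEMENT:
    print(f"{fmt:>8}: {weight_memory_gb(N_PARAMS, fmt):.1f} GB")
# float32: 28.0 GB, bfloat16: 14.0 GB, int8: 7.0 GB
```

Halving or quartering the bytes per weight also cuts the memory bandwidth needed per inference step, which is part of why native low-precision support helps CPU inference.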
I would say that we, we foresaw some elements of this. I can’t say that I could have predicted exactly where we were going to end up today with these LLMs, but you could see what direction things were going. And so, you could start to build things into the CPU and then build the ecosystem around it so that it was as easy to run AI inference as any other workload on these processors.
Now, that being said, as we look at the very, very largest models, you know, as we get out into the multi-hundred-billion parameter models, the trillion parameter [10:00] models that'll soon come, at that point, that does require some additional processing elements within the SOC. And that's why we announced, a couple of months ago, our AmpereOne Aurora line, which still uses the goodness of our really efficient general-purpose cores.
But then it adds in some acceleration elements, not in the cores themselves, because you don't necessarily want to burden the cores with these types of compute elements, but tightly coupled to the cores across the mesh within the SOC. So, you get that low latency, but you also have a lot of flexibility with how you scale those compute elements.
And you don't have to make a lot of tradeoffs between whether you want to optimize for general purpose or AI inference at any given moment, given that our customers don't know that in advance, and they will always have to be able to balance between the two of them. So, I think that the details have changed, the tactics change over time as the workloads change, but the strategy isn't different.
The strategy is still really high performance at really low [11:00] power across a flexible set of workloads, like has always run in the cloud. Those workloads just change, and that means that the compute changes with it over time. And if you can build a really flexible platform that's able to easily change over time, integrate things into it that become general purpose over time, then you have a big advantage.
Brian: I have so many forward-looking questions for you, but we'll get to those. Right now, at this time slice, this is a perfect question to ask of the product guru. What are the challenges around implementations that you're hearing from customers and developers out there, and how is Ampere addressing them?
Jeff: Yeah, I think that, you know, there's a couple of elements to that. One is that a lot of these solutions are very, very complex today to implement. The solutions at the platform level look a bit different at times than the solutions at the platform level that existed [12:00] a couple of years ago.
There’s a lot more elements. System level optimization is a lot more important than, than maybe it was a couple of years ago because of the amount of data traffic, the movement between different elements within the server. One thing that we’re doing there is we, about a year ago we created the AI Platform Alliance.
The idea here was there’s a lot of people out there that are building really, really cool elements that can go into an overall AI solution. We’re building a CPU, there’s people that are building accelerators that are really good at, you know, maybe it’s different types of models, different size models, different deployment models so wide variety of accelerators out there.
Also, there’s a wide variety of ISVs that are, that are building their own frameworks that maybe sit on top of some of the, the other AI pieces to make it easy for enterprises to deploy. At a higher level, you have S. I. S. O. E. M. S. That are building a wide variety of systems that may look very different than the systems that [13:00] people were deploying a couple of years ago.
So, easy-to-deploy solutions. Like for instance, we built a solution with NetInt, which has their video processing units. So, you think of a use case like Whisper, so using Whisper for, say, doing transcoding, translation, and transcription of live video, for instance a newscast you want to close caption.
Maybe you want to close caption it in 30 different languages. Well, Whisper does a great job of that. And so, when you take an overall solution that has [14:00] NetInt's VPUs and our general-purpose CPUs, and then you build it out at a platform level with someone like Supermicro, now you have almost an appliance that somebody could deploy.
So, if you're a broadcaster, you have a box that does exactly what you want, that's going to be able to, say, process hundreds of video streams simultaneously and translate and transcribe them in real time. And you don't have to worry about piecing this together yourself. And so that's one of the things that we're helping to address: the complexity of the solutions is very high.
Because we are still in the early stages of this AI cycle. It may seem like we're far into it; it depends on, I guess, where people have been in their place in the industry over the last, you know, five, 10, 15 years, but we're still really early in this AI cycle. And so, we're dealing with a lot of nascent technologies and solutions that are just coming together, you know, in real time.
So that's, you know, what the AI Platform Alliance helps address. The other issue, again, [15:00] goes back to the power issue. There's suddenly, potentially, a big extra increase in compute demand. We didn't suddenly get more power in these edge locations. You're still constrained to maybe a hundred watts.
Your big data center is still constrained to whatever power capacity it got off the grid. In five or ten years, that can change. Obviously, we see people going in and looking to, you know, spin up nuclear reactors again and things like that. But that doesn't happen overnight. That's not a next-week type of thing. We still have a big build out that's going to occur before all that stuff comes online.
And while we're doing all that and bringing that online, you know, the compute demand keeps going up and going up and going up. So, we need to do everything we can. We need more power sources. We need cleaner power sources. We need more efficient data centers. We need more efficient processors. We need more efficient solutions.
And we need all that stuff to come together. And I think that, you know, along with the power challenge, the thermal challenge is difficult too. You know, there's a lot of cool, innovative technologies out [16:00] there: direct liquid cooling, immersion cooling. Some are more practical than others, and they can be really efficient.
But if you have a data center that you just built a couple of years ago, and when you built it you were planning for a 10, 15, 20 year, you know, life cycle, you weren't planning on going in and retrofitting the whole thing in a couple of years. You kind of have what you have; you have the investment that you made.
These data centers live over a very long lifetime, and it's not trivial to go in and gut them or completely overhaul the architecture of them or redesign the way the racks are laid out or create new power and cooling delivery systems. Those don't happen overnight. And it's important that we deliver solutions that can be deployable today, everywhere, at scale.
And not just solutions that eventually will be able to be deployed everywhere at scale in five or 10 years. So those are some of the problems that we're helping, you know, people to solve today: just make things simpler, help them fit into [17:00] their existing environments today, because not everything can change overnight, and build a path to the future with these customers.
And I guess as enterprises are coming online and, you know, running more and more AI workloads, and more and more of those AI workloads are very, very critical to them, and they want to have some control over where and how they run them, you know, it just magnifies the problem a little bit more. Because this isn't just a couple of big hyperscalers running AI workloads. This is hundreds, thousands, tens of thousands of enterprises where AI is now a critical workload, and they have to figure out how to adopt this at scale as well.
Brian: So, the crown jewels at this point are AmpereOne and AmpereOne Aurora. Give us a compare and contrast. And somewhere in my research, and now I can't find it in my questions, you're enabling fanless designs.
Jeff: We are. Yeah. So, when you look at it, you know, [18:00] AmpereOne is our flagship product today, general-purpose compute. Today, the CPUs have 192 cores; we'll be releasing a 256 core CPU soon. And that's really what we see as the workhorse within the data center space, but also out into the edge.
So, really efficient processors that can run any workload you throw at them. And that serves the bulk of the data center. And you mentioned the fanless designs. Again, the type of efficiency that we're delivering in the data centers equally applies out to the edge. You know, different scale. Maybe instead of a 500-watt server, now we're talking about a 50- or 100-watt device that's sitting out at the edge somewhere.
And some of those devices do need to be fanless. You know, a, a good example of one of these is we have a deployment in space today. Fans don’t work in space. Space is a vacuum. So, a fan does no good in space. There’s no airflow in space. And so those are devices that have to be passively cooled because you don’t have any way to actively move air or, you know, a liquid’s not feasible in that environment [19:00] either.
And so, yeah, we're enabling really efficient solutions out there where you can still get a couple dozen cores at 40 watts, which doesn't require any active cooling at all. That's AmpereOne: you know, a wide range of core counts, a wide range of use cases, a wide range of power consumed, but always efficient and matched to the environment that people are looking to deploy it in.
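The two scales Jeff contrasts can be compared on a watts-per-core basis. A back-of-envelope sketch using figures from the conversation (a 500-watt server, a 40-watt fanless edge box; the core counts are illustrative assumptions):

```python
# Back-of-envelope power-per-core at data-center vs. fanless-edge scale.
# Wattages come from the conversation; core counts are illustrative.

def watts_per_core(total_watts: float, cores: int) -> float:
    """Average power available to each core in a fixed power envelope."""
    return total_watts / cores

server_w_per_core = watts_per_core(500.0, 192)  # data-center server
edge_w_per_core = watts_per_core(40.0, 24)      # fanless edge device

print(f"server: {server_w_per_core:.2f} W/core")  # ~2.60 W/core
print(f"edge:   {edge_w_per_core:.2f} W/core")    # ~1.67 W/core
```

The per-core budget is in the same low-single-digit range at both scales, which is the sense in which the same efficiency story applies from the data center out to the edge.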
And then AmpereOne Aurora is the next step in our product line. It's taking the flexibility that we built into AmpereOne, where we moved to, you know, a chiplet approach. We designed our own disaggregation architecture, we designed the die-to-die interfaces and architecture, but we designed them in a way to be very, very flexible.
And with that framework you can bring in acceleration chiplets, which seamlessly mesh with the general-purpose CPU cores. Taking that a step further, it could be any workload; it doesn't need to be just AI. Now, obviously that's kind of the killer workload of the moment, and that's where it makes the most sense to go in and apply those resources and build a specific type of accelerator for. But you can build this acceleration in for any type of workload, and this doesn't need to be Ampere-developed IP or Ampere-developed chiplets either; this can be third-party IP that has been developed by another company. Maybe it's something very unique to their workload or their environment, or maybe it's something that's incredibly sensitive for them, where they don't want this IP out [21:00] there and deployed by other people.
So, AmpereOne Aurora kind of takes that flexible framework, and it now really takes general purpose compute to another level, because we've created another way to create that general-purpose aspect while also having acceleration. So, in a way, you don't have to choose anymore between whether you want something that is general purpose versus something that is domain specific. You can actually have any of that within a flexible set of solutions that can go into the same types of platforms. And so, I think it's that evolution of compute, where maybe general-purpose compute has now turned into AI compute, and AI compute is that broader set of workloads, some of which are very specific and some of which are very general.
Brian: We talked about power at the edge, but because you sit in such a unique position, you have great [22:00] visibility into what's going on at the edge. There's a movement to move more AI compute to the edge and keep it there, for reasons of lower latency, privacy, and security. How do you see that evolving in the next couple of years?
There are obviously some workloads, I'm thinking video files, that are better suited to being computed on in the cloud. But how do you see edge workloads evolving?
Jeff: Yeah, I definitely see a big movement to the edge, and it's these latency sensitive workloads that tend to drive it. You know, it's why, going back 15, 20 years, you started to see CDNs built up for caching some of these videos out at the edge, because you wanted low latency and you also wanted to minimize some of the data movement as well.
So, things that are [23:00] lots of data and need to be low latency, those are really well suited to be sitting out at the edge. Now, what limits it sometimes is when the edge doesn't have enough performance; then you make compromises and you move things further away, because the source of the large processing is, you know, somewhere else.
And again, this same type of thing happened over the last 20 years or so, and these workloads find their natural place. I think there were big [24:00] arguments 20 years ago, 15 years ago, about the cloud and where things were going to sit between public cloud and private cloud and edge, and how we were going to create a taxonomy around this, and what it was all going to look like.
And at the end of the day, the, the workload needs and the economics of it kind of end up settling it. And there will be a large amount of workloads that if we enable the right technologies, we’ll sit at the edge and we’ll want to be close to the user. And I guess that’s, that’s our role is to make sure that we you know, we make that feasible and economical.
And so, AI is just a perfect example of a workload that wants to sit at the edge wherever possible. It has all the characteristics: it needs to be low latency, lots and lots of data. Privacy can be an issue. There can be locality issues as well. I mean, there will be places where you'll want to run different models depending on what country you're in or [25:00] what geo you're in, right?
It could be language dependent. It could be policy dependent. And so, there'll be a lot of reasons why inferencing is going to scale everywhere. It's going to be the biggest scale-out workload that we've ever seen. And that's one thing where I think, as we think about AI, we haven't been careful in how we talk about inference.
What gets lost a lot of times in the bigger picture is all AI gets thrown in together, and training and inference get kind of thrown together. And the key here is that inference and training look incredibly different. So, a lot of things I'm saying here about AI inference maybe aren't true for AI training.
But we have to look at them separately. Even though, from a workflow perspective, you train, then you infer, from a compute perspective the requirements are very different. And from a deployment perspective, the deployment requirements are very, very specific, and they're going to want different solutions.
Brian: Speaking of the edge, Jeff, Ampere has a very interesting use case with Space Tech, which is a Chinese technology company that's part of a larger real [26:00] estate enterprise. Now, we've covered this in a separate podcast and a case study with Space Tech CEO Sean Ding. And I encourage listeners to go listen to that story because it's amazing.
Jeff, tell us about it from your perspective.
Jeff: We've been working across a pretty wide range of use cases for a while now, everything from cloud out to edge. And the connective tissue is that there's a lot of analytics and AI that occurs in all those places. And so having a really high-performance solution that's also really power efficient matters across that whole spectrum, but for different reasons.
In the case of Space Tech, you know, they provide property services for a really large amount of real estate in China. And the challenge that they were facing is that there's just more and more smart devices that are collecting data. That's generating a ton of traffic, but it also opens up a lot of service opportunities.
Things like facial and vehicle identification, [27:00] security services, managing certain assets on the properties like the lifts. So, with all that dynamic data coming in, and all these opportunities to use that data in a smarter way, they needed a really high-performance edge AI server to provide services, you know, along those lines.
And so that's sort of where this originated: they were using an existing x86 solution, but they needed something that was higher performance, given the change in data traffic and the growth in opportunities, and something that was also really power efficient at the same time.
And that’s where Ampere came into the picture.
Brian: Can you share any of the, the data that has been captured so far as, as these guys have implemented Ampere solutions?
Jeff: Yeah, yeah, I definitely can. You know, when you look at the workload, there's obviously a lot of elements to it, but probably the two most critical pieces are video decode, because that's often the data source that they're using, [28:00] and then AI inference, to actually generate the results that you're going to take some action on. Their previous solution was an x86-based solution, and it had a discrete GPU in it. So that's their baseline. And the key was to deliver much more performance for video decode and for AI inference than that solution, but to still do so in a really power efficient way.
When you look at those two elements, on the video decode side, our processor in this solution was able to decode 126 25-frame-per-second video streams in parallel. And so that far exceeded what they were capable of doing with the existing x86 solution. Where it kind of comes full circle is then taking that data and running the inference models.
So, running a double-digit number of inference models in parallel, the solution that we provided is 2.6 times faster [29:00] than the x86 solution. So, it delivered a pretty big gain in performance. That made it a no brainer to go ahead and utilize this solution: you know, a step function increase in what they were able to provide in terms of services to the properties.
Brian: In terms of that x86 migration in this case, are you seeing that in other customer engagements as well, in other applications?
Jeff: We are, we are, yeah. I think that, you know, when you look at the market, there's some small portion of the market where it's Arm native applications and code. Maybe it's Android applications, maybe it's something in the automotive space where, for many, many years, the code has always been Arm based, and it's a no brainer to run that stuff on an Arm based processor like Ampere's. But there's a really large part of the market that has traditionally been x86 code. The issue [30:00] that market's facing is that running that x86 code on x86 CPUs isn't giving them the same gains that they were used to expecting over the last 10, 15, 20 years. Obviously, the Intel processors aren't as competitive on a gen-on-gen basis as they once were.
And all of those processors are seeing large increases in power. So, it's starting to make it difficult to utilize x86 processors for a lot of power efficient use cases. So, what I'm really seeing now is it's not that there's an Arm market and there's an x86 market. There's a market that needs high performance, power efficient processors.
And that market is now moving their code from being x86 based to being Arm based. And the good thing about this is that the places where that's most common, the workloads where it's most common, you know, these are a lot of cloud native workloads. These are a lot of workloads that use open-source code. So, it's people running PyTorch, it's people running [31:00] MongoDB.
It's people running NGINX. All of that code has already been ported over to Arm; it has been for a long time. And so, it's not a lot of extra work for somebody to utilize the Arm based code versus the x86 code. It's just a one-time switch. And so increasingly I'm seeing a lot of those traditional cloud and edge workloads moving off of x86
over to Arm, and it's running Arm code natively on Arm based processors. And so, it's really a seamless experience for the end user.
Brian: Let's come back to the Space Tech case study for a second. So, this is an edge use case. And when you talk about modern workloads, AI workloads, it's usually a GPU conversation.
But you guys are obviously CPU forward. You need GPU power and performance and presence in the data [32:00] center, especially around training. But as you move out to the edge, talk to us about that landscape: GPU versus CPU.
Jeff: I would even cut it one step further. You know, I think it’s, it’s general-purpose CPU on one end.
It's GPU maybe on the other end, but just through a traditional lens. And then there's domain specific accelerators that probably sit somewhere in the middle. Now, today, certainly AI training in data centers is the domain of GPUs. There's a decade plus of legacy there. These are big workloads that run for days, weeks, months, in big clusters.
And so, they look a little bit more like the GPGPU workloads of the past in supercomputing. But training workloads are things that can happen in one place. They don’t need to happen in a thousand different places around the world. Because you train the model once, and then you deploy the model. When you look at deploying that model or running these workloads, now it’s a totally different problem state.
It’s not running one workload for a really long period of time in [33:00] one place. It’s now running one workload that might run for, you know, nanoseconds to milliseconds. So, it needs really low latency. And then running that workload millions or billions of times in a short period of time. And those workloads have to sit really close to the end users that are getting the results because latency is really, really important.
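The latency argument here can be sketched as a simple budget. A hypothetical example (the millisecond figures are assumptions, not measurements):

```python
# Why inference wants to live near the user: the network round trip
# eats into a fixed end-to-end latency budget before any compute runs.
# All figures below are hypothetical.

def inference_time_left_ms(total_budget_ms: float, network_rtt_ms: float) -> float:
    """Milliseconds left for actual inference after the network round trip."""
    return max(0.0, total_budget_ms - network_rtt_ms)

BUDGET_MS = 50.0  # assumed end-to-end latency target

print(inference_time_left_ms(BUDGET_MS, network_rtt_ms=2.0))   # nearby edge site: 48.0
print(inference_time_left_ms(BUDGET_MS, network_rtt_ms=45.0))  # distant region: 5.0
```

With the same latency target, a nearby edge site leaves almost the whole budget for compute, while a distant data center leaves almost none, which is why these short-lived, high-volume inference requests want to sit close to the user.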
So now the GPU solution that sits in that big hyperscale data center training the model, that doesn’t work in those places. It doesn’t work for a number of reasons. It doesn’t work because it could be trickier to deploy. It’s just a more complicated system. And that’s not what you need out of the edge.
You need something that’s much more compact and simple. It also tends to have a lot of thermal constraints, so it can be hard to cool some of those GPUs. They tend to consume more power, and they can also be significantly more expensive when you’re looking for a big scale-out solution that you can deploy anywhere.
And so, what somebody is looking for is a really power-efficient solution that’s flexible, that can handle a lot of different workloads coming [34:00] at it. It’s not just running one workload for a long period of time. Keep in mind, these edge servers or edge devices aren’t just running AI inference all day long.
You know, they’re also running all the other workloads that need to be provided in conjunction with inference: maybe a bunch of databases, a web server. There’s a bunch of caching pieces in there. I mentioned the video decode as well. So there needs to be a very flexible solution, and this is where CPUs are really good.
CPUs are general purpose. They can run any workload, which means that whatever you throw at them, they’re going to be able to handle it. And so even when you don’t know what the demand is from second to second, a CPU does a really good job of handling it. CPUs are also good in this use case because there’s a lot of data traffic.
And minimizing the number of hops that that data is making and keeping it in the CPU can be the lowest-latency way to deliver these results. So, you know, while a GPU can get the job done, it tends to do so a little less efficiently, more expensively, and a little less [35:00] flexibly in these types of environments.
Now, I did mention there’s a piece in the middle where there is a need at times for different domain-specific accelerators, but that doesn’t necessarily mean a GPU. That may mean a piece of AI acceleration hardware that’s really, really good at large language models, or it could be a piece of acceleration hardware that’s really good at computer vision, like we see in NPUs and other types of DPU devices.
It could be one that is really good at networking. And so, I think that what we really see is you want really general-purpose performance matched up with very, very domain-specific acceleration. And that mix-and-match approach is a better solution.
Brian: When you sat down and started to hash out how this would look with the SpaceX folks, how much were they interested in the power efficiency story that you bring, and how much were they interested in the code base, the ecosystem [36:00] around which they could create their solution?
Jeff: Yeah. I mean, I think if you look at the requirements they had, the number one requirement was performance, with the number one constraint being power. And so, you know, really their goal was delivering multiple X more performance while still sitting in the same power envelope.
So that was the number one consideration. And then I think, similar with many other people, the code base and the overall ecosystem, you know, that’s table stakes. And so, once we were able to prove that this code ran just as easily on these devices with only minor changes, by moving to an Arm-based code base and using the right SDKs, then that became an easy part of the story.
But at the end of the day, you know, the reason why we’re seeing this big migration away from x86 is because there’s a need for more and more performance in environments that are constrained, and x86 just doesn’t get the job done. And when I say constrained [37:00] environments, the obvious one is the edge, where you clearly only have a limited amount of power, but frankly, hyperscale data centers are constrained today too.
It’s just a different order of magnitude.
Brian: We’re bumping up on time. We talked about workloads at the edge. We talked about workloads everywhere. Last question for you. What is your personal favorite AI application today?
Jeff: Man, my personal favorite. Well, actually I have young kids, so I would say that the two things they’re most entertained by right now.
So, one is using ChatGPT to create stories. They love that one. They love feeding in goofy stuff and seeing what ridiculous, you know, stuff comes back out again. So that’s one that my kids really like. The other one is Stable Diffusion. So, with some of the image generation models, I like this one for two reasons.
One, it’s never been easier to create, you know, what are essentially stock images. I mean, anything you come up with for a blog or [38:00] something else, within seconds, you’ve got an image that kind of matches what you were thinking. But this is another one, coming back to my kids.
They also love sending in crazy prompts and seeing what crazy image comes back. I love, yeah, I think it’s the creativity and the way that the LLMs of today are able to kind of take the craziest things you come up with and then, you know, turn them into, whether it’s writing, images, or video, instantly, in ways that we never could before.
It kind of always had to sit in your imagination and, and now they can kind of bring some of that stuff out and help you share it with other people.
Brian: Amazing times. We’re fortunate to live in them, aren’t we? It is. Well, Jeff Wittich, thank you so much for your time. Awesome conversation. And we look forward to having you on again in the future.
All right. Perfect. Well, thanks, Brian. I really enjoyed [39:00] it.
SoftBank Group Corp. announced late Wednesday it is acquiring semiconductor designer Ampere Computing in a $6.5 billion deal that will expand its investments in AI infrastructure
SoftBank Group Corp. and Arm Holdings plc may acquire chipmaker Ampere Computing LLC, Reuters reported on Wednesday
It’s unclear what price the company might fetch. Two years ago, SoftBank reportedly considered buying a stake in Ampere at an $8 billion valuation. It earlier expressed interest in acquiring the chipmaker outright
SoftBank became a major chip industry player in 2016 when it paid $32 billion to buy Arm
It took the chip designer public in 2023 but retains a majority stake
U.K.-based Arm sells popular processor designs that underpin most handsets
connected devices and a growing number of data center systems
Ampere develops a line of server processors based on Arm’s instruction set architecture
Its latest chips feature higher memory bandwidth than their predecessors, which enables applications to move data to and from random-access memory faster
Last July, Ampere previewed an upcoming processor called Aurora
The chip will feature up to 512 cores and include a dedicated artificial intelligence module
It will also ship with high-bandwidth memory (HBM), a type of high-speed RAM that is widely used in AI chips
Reports that Ampere is exploring a sale first emerged in September
Bloomberg reported that the company had hired a financial adviser to help it weigh its options
Today’s report cautioned that the acquisition talks with SoftBank and Arm may not lead to a deal
it’s possible Ampere could accept a takeover offer from a different bidder
When reports of the potential sale emerged in September
Bloomberg’s sources said the chipmaker could also opt to remain independent
The report that SoftBank may buy the company comes less than a year after it acquired another chip startup. In July, the company inked a deal to purchase Graphcore Ltd.
The chipmaker had raised more than $760 million in funding prior to the acquisition
Graphcore offers an AI processor called the Bow IP that is based on wafer-on-wafer technology
It comprises two vertically stacked layers: one that contains logic circuits and another equipped with capacitors
components designed to hold an electric charge
The capacitors deliver this electricity to the logic circuits in order to boost their performance
AI servers include not only machine learning accelerators but also a central processing unit that coordinates those accelerators
SoftBank could enter the market with a server that combines Graphcore’s Bow IPUs and Ampere CPUs
At one point, the conglomerate considered partnering with Intel Corp
SoftBank reportedly also hopes to provide software for those servers and play a role in supplying the power they use to run AI models
The Japanese investment company SoftBank Group (SBG) is taking over Ampere Computing. Ampere is a chip designer that makes server-grade silicon based on Arm architecture
SoftBank is paying $6.5 billion for this acquisition. Ampere’s most important investors, Oracle and Carlyle, are selling their shares, according to The Register
The Ultra model it launched in 2023 is an example of this
all compatible with the Armv8.6+ architecture and version 5 of the Server Base System Architecture – but also customized with changes that Ampere itself has implemented
This enormous chip is designed to process AI workloads
And that seems to be exactly what interests SoftBank
The Japanese company reports that the acquisition of Ampere furthers its strategic vision and commitment to innovation in AI and computing
SBG CEO and chairman Masayoshi Son (photo) is quoted in Ampere’s announcement about the deal
He believes that the future of Artificial Super Intelligence requires groundbreaking computing capacity
Ampere’s expertise in semiconductors and high-performance computing accelerates AI innovation
The SoftBank announcement also mentions that Ampere will work together with other SBG companies
This could mean that those companies will adopt Ampere processors
such as the Korean/Japanese web giant LY Corp
SoftBank also runs a telecom company in Japan
its Vision Fund investment arm has shares in TikTok operator ByteDance
And in countless large-scale e-commerce companies
If SBG can persuade companies to choose Ampere’s products
It is unknown what the acquisition means for Arm’s ambitions to create its own server processors
Perhaps SoftBank will find a way to run two server processor companies
Or it will focus Arm on direct sales of customization and let Ampere sell the commodity equipment
The synergistic Cu0–Cu+ sites are regarded as the active species for NH3 synthesis via the electrochemical nitrate reduction reaction (NO3−RR)
the mechanistic understanding and the individual roles of Cu0 and Cu+ remain elusive
A major obstacle is that it is challenging to effectively regulate the interfacial motifs of Cu0–Cu+ sites
we describe the tunable construction of the Cu0–Cu+ interfacial structure by modulating the size effect of Cu2O nanocube electrocatalysts for NO3−RR performance
We elucidate the formation mechanism of Cu0-Cu+ motifs by correlating the macroscopic particle size with the microscopic coordinated structure properties
and identify the synergistic effect of Cu0-Cu+ motifs on NO3-RR
Based on the rational design of Cu0-Cu+ interfacial electrocatalyst
we develop an efficient paired-electrolysis system to simultaneously achieve the efficient production of NH3 and 2,5-furandicarboxylic acid at an industrially relevant current densities (2 A cm−2)
while maintaining high Faradaic efficiencies
and long-term operational stability in a 100 cm2 electrolyzers
indicating promising practical applications
it is crucial to explore and develop a highly efficient electrocatalyst to suppress the HER process and enable the desired conversion of NO3− into high-value NH3
while the electronic interaction between the support and active Cu sites would bring inevitable interference when identifying the individual roles of interfacial Cu0–Cu+ sites
with the goal of enhancing the NO3−RR performance of Cu-based electrocatalysts
it is highly necessary to understand the interfacial behavior and reaction mechanism via controllable Cu+–Cu0 sites
we developed a feasible and efficient strategy to construct tunable Cu+–Cu0 interfacial motifs by modulating the size effects of Cu2O nanocube catalysts
We revealed the design principle of Cu+–Cu0 pairs by correlating the macroscopic particle size with the microscopic localized coordinated structure properties by in situ electrochemical Raman and X-ray absorption near edge structure (XANES) characterization
Based on the controllable construction of the Cu+–Cu0 interfacial structure
we elucidated the individual and synergic roles of Cu+ and Cu0 sites during the NO3−RR process by combining electrochemical measurement and DFT calculations
we developed an efficient paired-electrolysis system to simultaneously achieve the efficient production of NH3 and 2,5-furandicarboxylic acid (FDCA) by coupling the cathodic NO3−RR process and anodic HMF electrochemical oxidation reaction (HMFOR) process at an industrially relevant current density
while maintaining high Faradaic efficiencies (FENH3 75.6%
yield rates of NH3 (5.20 mmol h−1 cm−2) and FDCA (0.47 mmol h−1 cm−2)
and long-term operational stability (20 h) in a 100 cm2 anion exchange membrane (AEM) electrolyzer
The techno-economic analysis demonstrates the potential of this system
d Statistical size distribution of nanoparticles of S-Cu2O
Source data for d–f are provided as a Source Data file
The sequence of signal intensity is S-Cu2O > M-Cu2O > L-Cu2O
indicating the higher content of oxygen vacancies on the small-size Cu2O nanocubes
According to the principle of surface chemistry
the nanosized material not only shows a higher specific surface area
but also exposes higher content of unsaturated coordination sites
and the concentration of unsaturated coordination sites is negatively related to the crystal size of materials
it is rational to speculate that the concentration of defect sites is negatively correlated with the particle size of the Cu2O nanocubes
the as-obtained Cu2O catalyst with the Cu/Cu2O interface structure is denoted as S-Cu/Cu2O
a In situ Raman spectra collected during the electrochemical reduction process on various Cu2O samples
b The relationship between the intensity of characterized Raman signals and the time of electrochemical reduction
c AES spectra of Cu LMM over various Cu2O nanocube catalysts after the electrochemical reduction process
Inset is the magnification of Cu K-edge spectra
e Fourier transforms of Cu K-edge EXAFS spectra with optimal fitting results for various Cu2O nanocube catalysts after electrochemical reduction process
f Structural coherence changes in EXAFS coordination number of Cu–Cu bonds and Cu–O–Cu bonds
g Schematic diagram of structural transformation of Cu2O with different sizes
Source data for a–f are provided as a Source Data file
indicating the formation of a metallic-Cu-dominated Cu/Cu2O interface structure after the electroreduction process
owing to the low defect concentration on large-sized Cu2O
we found that the Cu2O is slightly reduced to metallic Cu species on the L-Cu/Cu2O sample, forming a Cu+-dominated Cu/Cu2O interface structure
by rationally tuning the particle size of the Cu2O precursors
a Cu/Cu2O interface structure with a flexible ratio of Cu0 and Cu+ could be effectively constructed
a LSV curves of various Cu/Cu2O catalysts in the 1 M KOH with 0.1 M KNO3 solution with the scan rate of 5 mV s−1
Faradaic efficiency (b) and yield rate (c) of NH3 over various Cu/Cu2O catalysts at different applied potentials
(n = 3) with the error bars representing the s.d
d Comparisons of NH3 yield rates and Faraday efficiency between the M-Cu/Cu2O and typically reported NH3-synthesis electrocatalysts
e Faradaic efficiency of NH3 production and NO3- reduction rates in the 1 M KOH with various concentrations of KNO3 solution over M-Cu/Cu2O catalysts at −0.2 V vs
f The Faradaic efficiency and yield rate of NH3 over M-Cu/Cu2O catalysts during the stability measurement
the NH3 Faradaic efficiency (FE) of M2-Cu/Cu2O (87.6%) is slightly lower than that of M-Cu/Cu2O (95%) and higher than that of L-Cu/Cu2O (86%)
indicating that the size-effect strategy is effective
and the products were determined by 1H NMR spectroscopy
The results show the typical three peaks of 14NH4+ and the typical two peaks of 15NH4+ in the 1H NMR spectra
which confirms that the synthesized NH3 comes from the NO3− in solution rather than from environmental contamination
a In situ FTIR spectra of M-Cu/Cu2O catalyst during the nitrate reduction at various electrolytic times
b The evaluation of nitrogen selectivity and conversion rate over various Cu/Cu2O catalysts at −0.2 V vs
d ESR spectra of DMPO adducts over various Cu/Cu2O nanocube catalysts in the absence and presence of nitrate
the signal is collected after 15 min electrolysis at -0.2 V vs
e The transformation of NO3− over various Cu/Cu2O nanocube catalysts with the presence of 10 mM TBA solution
f LSV curves of M-Cu/Cu2O catalyst in the absence and presence of SCN− solution
g Gibbs free energy diagrams of the conversion of nitrate to ammonia over various Cu/Cu2O catalysts; h the kinetic energy barriers of potential limiting steps over the various Cu/Cu2O catalysts with CI-NEB method
Source data for a–h are provided as a Source Data file
suggesting that the NO2− electrochemical reduction step is sluggish on the metallic-Cu-dominated Cu/Cu2O interface structure
which leads to the abundant NO2− accumulation during the NO3−RR process
the rapid NO2− electrochemical hydrogenation is observed on the Cu+-dominated Cu/Cu2O interfacial catalysts
indicating that the Cu2O is the active species for the NO2− → NH3 process
indicating that the atomic *H-mediated indirect reduction pathway is dominant
the M-Cu/Cu2O and L-Cu/Cu2O catalysts showed rapid NO3−RR kinetics, suggesting that the Cu2O species provide sufficient atomic hydrogen for the NO3−RR process
play a vital role during the NO3−RR process
which is consistent with the results of DFT calculation
the OH− species may also be competitively adsorbed on the Cu sites
we found that the direct reduction of adsorbed *OH species is also endothermic by 0.24 eV
demonstrating that M-Cu/Cu2O exhibits high selectivity for NO3−RR
Further, we also explore the kinetic energy barrier for the PLS of the different Cu/Cu2O materials. As shown in Fig. 4h
the kinetic energy barrier of the PLS is 0.51 eV on the M-Cu/Cu2O
which is kinetically favorable for the NO3− electrochemical reduction process
b Scheme and digital photo of NO3−RR//HMFOR paired-electrolysis system
c The LSV curves of overall water splitting systems and NO3-RR//HMFOR Paired-electrolysis system
Potential-dependent Faradaic efficiency and yield rate of NH3 (d) and FDCA (e) in AEM electrolyzer
f Stability measurement of NO3−RR//HMFOR Paired-electrolysis system at the current density of 2 A cm−2
g Techno-economic analysis of NO3−RR//HMFOR paired-electrolysis system
Source data for c–g are provided as a Source Data file
we developed a promising strategy to achieve the controllable construction of Cu0/Cu+ interfacial structure by modulating the size effect of Cu2O nanocube electrocatalysts
Combining the in situ electrochemical Raman and X-ray absorption near edge structure (XANES) characterization
the design principle of Cu0–Cu+ pairs has been revealed by correlating the macroscopic particle size with the microscopic localized coordinated structure properties
Based on the controllable construction of the Cu0–Cu+ interfacial structure
we elucidated the synergic roles of Cu+ and Cu0 sites during the NO3−RR process by combining electrochemical measurement and DFT calculations
The results reveal that Cu0 is the main active site in the Cu0–Cu+ motifs
while Cu+ provides atomic *H species to accelerate the potential-limiting step, the hydrogenation of *NH2 to NH3
Based on the design of Cu+–Cu0 interfacial electrocatalyst
we developed an efficient NO3−RR//HMFOR paired electrolysis system to simultaneously achieve the efficient production of NH3 and FDCA at an industrially relevant current density
while maintaining high Faradaic efficiencies (75.6% for NH3
and long-term operational stability (20 h) in an AEM electrolyzer with an area of 100 cm2
This work provides a strategy to design energy-effective and eco-friendly electrocatalysts for large-scale industrial electrolytic synthesis of high-value-added products
5-hydroxymethylfurfural (98%) were purchased from Energy Chemical
5-hydroxymethyl-2-furancarboxylic acid (98%)
and 2,5-furandicarboxylic acid (98%) were obtained from Aladdin
99%) were used without further purification
Deionized water (UPR series super pure water purification system
resistivity >18.2 MΩ cm) was used to prepare all solutions
a mixture of 10.0 mL of 0.009 M ascorbic acid solution and 16.0 mL of 0.113 M NaOH solution was added into 16.0 mL of 0.005 M copper acetate solution
The mixed solution was stirred vigorously for 30 min
during which the solution became yellow and turbid
indicating the formation of Cu2O nanocubes
the precipitates were collected and washed through centrifugation
All the experiments were carried out at room temperature (298 K)
The Cu/Cu2O nanocubes were synthesized from Cu2O nanocubes by a constant potentiostatic reduction process at the potential of −0.6 V vs RHE for 300 s in 1 M KOH and 0.1 M NO3− solution
This process was implemented in a three-electrode system
and a Hg/HgO electrode was used as the reference electrode
After the electrochemical reduction process
the obtained Cu/Cu2O nanocubes were washed with absolute ethanol and blown dry by nitrogen
XRD patterns were collected via an X-ray diffractometer (Shimadzu XRD-7000) using monochromatic Cu Kα radiation (λ = 1.5406 Å) to investigate the phase structure transformation
Scanning electron microscopy (SEM; JSM-7610F) and transmission electron microscopy (TEM; JEOL F200 equipped with energy dispersive spectrometer (EDS)) were employed to observe the microstructure and elemental distribution of the catalysts
The chemical composition and surface valence of the catalysts were analyzed through X-ray photoelectron spectroscopy with an Al-Kα X-ray source (E = 1486.6 eV
and the binding energy was corrected with a C 1s spectral of 284.8 eV
Raman spectroscopy was collected on a laser Raman spectrometer (Horiba LabSpec6
The ultraviolet-visible (UV-Vis) absorbance spectra were measured on a spectrophotometer (Beijing Purkinje General T6 new century)
the cathode and anode were separated by an anion exchange membrane (Fumasep FAA-3-20)
which is directly used for electrochemical measurement without pre-treatment
200 mg) were prepared by dispersing catalyst powder in a mixture of isopropanol (10 mL)
the catalyst inks were dropped on the carbon felt and dried at room temperature (298 K)
The anodic electrolyte of 0.1 M HMF in 2.0 M KOH was prepared before pumping into the anode chamber with a flow rate of 20.0 mL min−1
and the cathodic electrolyte of 0.1 M KNO3 in 2.0 M KOH was prepared before pumping into the cathode chamber with a flow rate of 20.0 mL min−1
The electrolyte pumping out from the anode and cathode were collected
and the product was analyzed using HPLC and UV–Vis
The UV–Vis spectrophotometer was used to quantify the concentration of nitrate
the electrolyte was neutralized and diluted to suitable concentrations
2 mL of Nessler’s reagent followed by 1 M HCl were added to the above solution
After standing at room temperature (298 K) for 20 min
the absorbance at a wavelength of 420 nm was recorded with a UV–Vis spectrophotometer
a mixture of p-aminobenzenesulfonamide (4 g)
N-(1-Naphthyl) ethylenediamine dihydrochloride (0.2 g)
A certain amount of electrolyte was extracted from the cathodic chamber and diluted to suitable concentrations
0.1 mL of color reagent was added to the above solution to further react for 20 min
The absorbance was recorded at a wavelength of 540 nm
Nessler’s reagent was used as the color reagent to determine the quantification of ammonia
a certain amount of electrolyte was extracted from the cathodic chamber and diluted to suitable concentrations
0.1 mL of potassium sodium tartrate solution (ρ = 500 g L−1) and 0.1 mL Nessler’s reagent were subsequently added into the above solution to further react for 20 min
The absorbance was recorded at a wavelength of 420 nm
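The colorimetric quantification described above amounts to a linear (Beer–Lambert) calibration: absorbance at the working wavelength is fit against standards of known concentration, and sample concentrations are read off the inverted fit. A minimal pure-Python sketch of that procedure (the standard concentrations and absorbance readings below are hypothetical illustrations, not data from this work):

```python
# Linear least-squares calibration: absorbance A = slope * C + intercept.
def fit_calibration(concs, absorbances):
    """Fit a straight line through (concentration, absorbance) standards."""
    n = len(concs)
    mean_c = sum(concs) / n
    mean_a = sum(absorbances) / n
    slope = sum((c - mean_c) * (a - mean_a) for c, a in zip(concs, absorbances)) \
            / sum((c - mean_c) ** 2 for c in concs)
    intercept = mean_a - slope * mean_c
    return slope, intercept

def concentration(absorbance, slope, intercept):
    """Invert the calibration line to recover a sample concentration."""
    return (absorbance - intercept) / slope

# Hypothetical standards (mg L−1) and their absorbances at the working wavelength.
standards = [0.0, 0.5, 1.0, 2.0]
readings = [0.05, 0.25, 0.45, 0.85]  # linear: A = 0.4*C + 0.05
slope, intercept = fit_calibration(standards, readings)
print(concentration(0.45, slope, intercept))  # recovers ≈ 1.0 mg L−1
```

Diluted samples (as in the text) would have the recovered concentration multiplied back by the dilution factor.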
The conversion of NO3− was calculated using:
The selectivity of NH3 and NO2− calculated using:
The Faradaic efficiency was calculated using:
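The equations themselves did not survive extraction. A plausible reconstruction, consistent with the variable definitions that follow and assuming the standard 8-electron NO3− → NH3 pathway (V, the catholyte volume, is an assumed symbol not defined in the surviving text):

```latex
\mathrm{Conversion} = \frac{\Delta C_{\mathrm{NO_3^-}}}{C_0} \times 100\%,\qquad
\mathrm{Selectivity}_{\mathrm{NH_3}} = \frac{C_{\mathrm{NH_3}}}{\Delta C_{\mathrm{NO_3^-}}} \times 100\%,\qquad
\mathrm{FE} = \frac{8\,F\,C_{\mathrm{NH_3}}\,V}{Q} \times 100\%
```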
Where \(C_{\mathrm{NH_3}}\) and \(C_{\mathrm{NO_2^-}}\) are the concentrations of \(\mathrm{NH_3}\) and \(\mathrm{NO_2^-}\)
\(\Delta C_{\mathrm{NO_3^-}}\) is the concentration difference of \(\mathrm{NO_3^-}\) before and after electrolysis
\(C_0\) is the initial concentration of \(\mathrm{NO_3^-}\)
F is the Faradaic constant (96,485 C mol−1)
Q is the total charge passing the electrode
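As a numerical sanity check on these definitions, the Faradaic-efficiency and yield-rate arithmetic can be sketched in Python. This is a minimal sketch: the 8-electron count for NO3− → NH3 is standard, but the example concentration, volume, charge, time, and electrode area are assumed illustrations, not values reported in this work.

```python
# Faradaic efficiency and yield rate for NO3− → NH3 (8-electron reduction).
F = 96485.0  # Faraday constant, C mol−1, as defined in the text

def faradaic_efficiency(c_nh3_mol_per_l, volume_l, charge_c, n_electrons=8):
    """FE = n * F * (moles of NH3 produced) / (total charge passed)."""
    moles = c_nh3_mol_per_l * volume_l
    return n_electrons * F * moles / charge_c

def yield_rate(c_nh3_mol_per_l, volume_l, time_h, area_cm2):
    """NH3 yield rate in mmol h−1 cm−2, normalized to electrode area."""
    mmol = c_nh3_mol_per_l * volume_l * 1000.0
    return mmol / (time_h * area_cm2)

# Hypothetical run: 0.05 M NH3 in 20 mL of catholyte after passing 1020 C.
fe = faradaic_efficiency(0.05, 0.020, 1020.0)
print(f"FE = {fe:.1%}")                      # ~75.7 %
print(yield_rate(0.05, 0.020, 1.0, 4.0))     # 0.25 mmol h−1 cm−2 over 1 h, 4 cm2
```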
the chemical reaction considered can be summarized with the reaction equations below
the reaction free energy can be obtained with the equation below:
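Neither the reaction equations nor the free-energy expression survive extraction. Standard forms consistent with the surrounding text (an assumption about what the authors used, not a quotation) are the overall 8-electron reduction in alkaline media and the computational-hydrogen-electrode free-energy expression, where \(\Delta E\) is the DFT electronic energy difference, \(\Delta E_{\mathrm{ZPE}}\) the zero-point-energy correction, and \(T\Delta S\) the entropy term:

```latex
\mathrm{NO_3^-} + 6\,\mathrm{H_2O} + 8e^- \rightarrow \mathrm{NH_3} + 9\,\mathrm{OH^-}
\qquad
\Delta G = \Delta E + \Delta E_{\mathrm{ZPE}} - T\Delta S
```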
This work was supported by National Natural Science Foundation of China (Grant Nos
22162025; 21925203; 22332002; 22402110; 22222304; 22403048)
the Youth Innovation Team of Shaanxi Universities
the Program for Young Scholar Talents of Wenying in Shanxi University
the Program of Technology Innovation of Shanxi Province (2024L007)
the Open and Innovation Fund of Hubei Three Gorges Laboratory (SK232001)
the Regional Innovation Capability Leading Program of Shaanxi (2022QFY07-03
2022QFY07-06) and the Shaanxi Province Training Program of Innovation and Entrepreneurship for Undergraduates (S202210719108)
the Foundation of State Key Laboratory of Coal Conversion (grant number J24-25-909)
the Natural Science Research Foundation of Shanxi Province (202303021211016)
These authors contributed equally: Yuxuan Lu
Engineering Research Center of Ministry of Education for Fine Chemicals
School of Chemistry and Chemical Engineering
Shanxi Key Laboratory of the Green Catalytic Synthesis of Coal-based High Value Chemicals
Shaanxi Key Laboratory of Chemical Reaction Engineering
College of Chemistry & Chemical Engineering
Jiangsu Co-Innovation Centre of Efficient Processing and Utilization of Forest Resources
National Synchrotron Radiation Research Center
Shanxi Research Institute of Huairou Laboratory
executed the experiments and collected the data
carried out the EXAFS characterization measurements
Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work
DOI: https://doi.org/10.1038/s41467-025-57097-x
SoftBank, the Japanese owner of Arm, is reportedly close to snapping up Ampere Computing, a company that develops data center chips based on Arm's architecture, for around $6.5 billion. Any deal would require approval from both Oracle and Carlyle Group, Ampere's major investors, before it could move forward.
However, a new Bloomberg report suggests that the deal is nearing completion and could be announced in the coming weeks
Sources familiar with the matter estimate its value to be around $6.5 billion
SoftBank valued Ampere at over $8 billion during a proposed minority investment
The acquisition would let SoftBank expand its footprint in the competitive data center and AI markets, but the move does raise some interesting strategic questions. Arm, which SoftBank controls, has been working to expand into server and AI processing.
Ampere is one of the few firms independently developing Arm-based server chips, and bringing it under SoftBank's roof could affect Arm's role as a neutral supplier of intellectual property. Integrating Ampere into SoftBank's ecosystem may also shift competitive dynamics in the industry.
Market conditions add a further layer of complexity to whatever benefits buying Ampere will bring SoftBank.
The semiconductor sector faces rising competition and concerns over slowing AI-related investments
The launch of an inexpensively produced AI model last month "has raised the specter that technology providers won't be generating as much revenue as predicted."
Ampere's chip design team could be put to work on the $500 billion Project Stargate, and the deal could see Arm shift from simply licensing chip designs to manufacturing its own silicon. That move would put Arm in direct competition with its existing customers, but it would also expand Arm's footprint in the growing and highly lucrative data center space.
The deal is set to conclude in the latter half of 2025
California-based Ampere will continue operating under its current structure until then
The exact reasons for the acquisition aren't known outside of Ampere and SoftBank, but there are plenty of theories flying around. One ties it to Project Stargate, the AI infrastructure venture backed by SoftBank, OpenAI, and Oracle (which is a major investor in Ampere), intended to secure American leadership in AI and boost the US tech sector.
How would Ampere's acquisition fit into Stargate? As Prickett Morgan notes, "probably somewhere around 1,500 of the nearly 2,000 people at Ampere Computing are chip designers and these people could be tapped by OpenAI to help design custom CPUs and GPUs for the Stargate effort."
"OpenAI has not put together a chip development team of any appreciable size, and OpenAI has not created a compute engine and shepherded it through development," Prickett Morgan concludes.
SoftBank. “Acquisition of Ampere Computing Holdings LLC.”
Ampere, the group's intelligent electric vehicle pure player, has an uncompromising vision: bring together the best partners and reach price parity with combustion-engine vehicles.
At the heart of these challenges: the battery
Until now, the NMC (nickel-manganese-cobalt) battery had dominated: an energy-dense, proven chemistry, but one whose cost rises with the volatility of metal prices. Ampere is broadening the spectrum by incorporating lithium-iron-phosphate (LFP). This option is perfectly suited to urban and peri-urban segments.
No compromise on quality: customers enjoy driving range tailored to their needs.
This technological shift requires working hand in hand with the best in their field
Working closely with LG Energy Solution and CATL
Ampere is building an integrated European value chain
ensuring cell supply and competitiveness of the LFP technology until 2030
Several Renault and Alpine models will benefit from this
allow us to offer customers the best range at the best price
It is essential to Ampere’s mission of democratizing electric vehicles in Europe.”
In parallel, Ampere led the development of a "Cell-to-Pack" architecture with LG Energy Solution. The world's first such "pouch"¹ battery, this innovation concentrates more cells in an identical volume, and marks a key step forward in making electric vehicles more accessible.
The integration of LFP and “Cell-to-Pack” technologies - and soon “Cell-to-Chassis” - will reduce the cost of batteries on vehicles by about 20% as early as 2026
A big step towards the democratization of electric vehicles, and a leap towards the next evolution: the cobalt-free battery.
¹ "Pouch" lithium-ion cells have a flat, flexible form factor, making it possible to create custom configurations adapted to the needs of the electric vehicle.
“Our battery strategy builds on Renault Group’s ten years of experience and investments in the electric mobility value chain
Our new partnerships will significantly strengthen our position
This is a major step to increase our competitiveness
anchor our Group in the French industrial dynamic and achieve our carbon neutrality objective
The Group reaffirms its commitment to producing popular
affordable and cost-effective electric cars.”
Production processes are becoming more agile and carbon-efficient. The AESC gigafactory, a few meters from the Douai factory, paves the way for the production of a low-carbon battery: it relies on waste heat recovered from neighbouring industries and drastically reduces the process footprint.
The goal: to achieve net carbon neutrality in Ampere ElectriCity factories by 2025
The Group also aims - with The Future Is Neutral - to become a European leader in closed-loop battery recycling, where nothing gets lost, thanks to the combined know-how of experienced partners like Indra.
By mastering the value chain, diversifying chemistries and anchoring its industrial ambition in the heart of Europe, Ampere is rethinking battery as a strategic lever for efficient, accessible and responsible electric mobility.
Cloud company currently owns 29 percent of chipmaker
Oracle's investment in Ampere Computing could give it the option for future ownership
First reported by Bloomberg and The Business Times
Oracle currently owns 29 percent of the chipmaker
and can exercise "future investments" that would lead to a controlling ownership in the company
has invested significant amounts in the company in the last couple of years
Oracle invested $600 million in the fiscal year ending on May 31 in convertible debt
That debt financing will mature in January 2026
and should Oracle exercise options through January 2027 to acquire additional equity
it would then have a controlling ownership of the company
Oracle revealed this on September 25 in a regulatory filing, which also noted that Ampere's founder and CEO Renee James will not stand for re-election as a director.
Earlier this month, reports emerged that Ampere was exploring a potential sale
Bloomberg reported at the time that Ampere had been working with a financial advisor for several months exploring a sale
As well as working with Microsoft and Google
Ampere designs chips for the likes of Tencent and TikTok owner ByteDance
Ampere currently estimates that 95 percent of Oracle's services use Ampere chips, and the companies recently teamed up with Uber to enable Uber to use custom Ampere chips on OCI
Oracle has said that it has reduced purchases of Ampere chips
The company placed a $104.1m prepayment order for Ampere processors in 2023, and that year also bought $4.7m of chips directly and $43.2m indirectly. In 2024, Oracle bought just $3m directly and none indirectly; it has around $101.1m remaining under the prepayment.
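The drawdown implied by those figures is internally consistent; a minimal sketch (variable names are illustrative, not from DCD's reporting), assuming the 2024 direct purchases were drawn against the 2023 prepayment:

```python
# Illustrative check: Oracle's reported prepayment balance, assuming the
# $3m of 2024 direct purchases were drawn against the $104.1m prepayment.
prepayment_2023_m = 104.1   # $m, prepayment order placed in 2023
direct_2024_m = 3.0         # $m, direct purchases reported for 2024

remaining_m = round(prepayment_2023_m - direct_2024_m, 1)
print(remaining_m)  # 101.1, matching the "around $101.1m remaining" figure
```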