Will the 1 trillion US dollar investment in generative artificial intelligence pay off?

A wave of investment is pouring into generative artificial intelligence. Is it worth it?

"This is the focus of all the debates at the moment," said Cao Cheng from Goldman Sachs Asset Management. Investments are flowing into various fields, from silicon that supports the training of artificial intelligence models to power companies that supply electricity to data centers covering several acres.

To understand where the industry is heading, Cao and Dane, portfolio managers on Goldman Sachs Asset Management's fundamental equity team, met with executives from 20 leading technology companies driving artificial intelligence innovation. Their conversations with companies ranging from semiconductor manufacturers to software giants, both public and private, suggest that some companies are already reaping benefits from artificial intelligence, and that some would buy more artificial intelligence hardware if they could get it.


"Our confidence continues to grow, believing that this technology cycle is real," Dane said. "As they say, it's going to be huge."

But there are risks. Cao and Dane said the handful of companies building large language models may find themselves competing in a winner-takes-all market, and the use cases or killer applications that would fully justify the enormous investment have not yet appeared. They also noted that, in a year when U.S. stock indexes have repeatedly hit record highs, the rally in technology stocks has never moved in a straight line. "You get these waves of investment digestion and of hype meeting reality," Dane said. "They both play out over a multi-year horizon."

Cao and Dane's insights build on a recent Goldman Sachs Research report examining the enormous investment in artificial intelligence, which includes interviews with MIT Institute Professor Daron Acemoglu and Goldman Sachs Head of Global Equity Research Jim Covello. The report, titled "Gen AI: Too Much Spend, Too Little Benefit?", notes that large technology companies, other enterprises, and utilities are set to spend roughly $1 trillion in capital expenditure over the coming years to support artificial intelligence.

We spoke with Cao and Dane of Goldman Sachs Asset Management about the industry's prospects for investment returns, whether this trend is playing out mainly in public or private markets, and which companies stand to benefit from the artificial intelligence boom.

How large do you think the investment in these models is? Do you think it is realistic to expect investment returns that justify this capital in the near term?

Cao Cheng: This is the focal point of all the current debate.

Brook Dane: The biggest question in the market right now is: are we getting a return on this investment? I feel fairly reassured that we are, and there are several data points that give me that confidence.

First: on this trip we spent a lot of time with the chief financial officer of a hyperscaler who had just come out of strategic planning, where they build one-year, three-year, and five-year forward-looking outlooks. This person was very open about how they calculate the ROI on the clusters where they deploy GPUs, and about how accretive those deployments are from a returns perspective.

Now, this company has been running large-scale inference workloads (using already-trained AI models to make predictions) on its infrastructure for recommendation engines. As those models have gotten better at predicting the next piece of content to serve, the company has seen time spent on its platform increase.

So for them, the ROI may be the simplest calculation: you deploy a cluster, you run a more sophisticated algorithm, that drives more time spent, more time spent creates more advertising surface, and that drives revenue.

The second data point comes from following the industry over the long term, and from many recent discussions with another hyperscaler about its capital expenditure plans. We know how disciplined they have historically been, how they look at the incremental revenue coming back and the incremental returns they earn on capital expenditure. This CFO emphasized that they have the money, and that if they could deploy more GPUs, they would.

Having known this person for 20 years and knowing how they handle capital budgets and deploy capital, I am sure they would not do this if they could not see real, tangible returns. They are very disciplined.

The caveat is that it is still early, and for these frontier models you cannot afford to fall off the leading edge. You cannot be the fourth-best frontier model and not spend the billion dollars it takes to make your model better. So for these players there is a bit of an arms race, and with it a bit of a leap of faith.

Cao Cheng, what is your view on the ROI question?

Cao Cheng: This is one of the most important questions. At the very least, it will determine the direction of the market over the next 6 to 12 months and whether technology stocks continue to outperform the broader market.

Clearly, for any ROI question, you first have to understand the scale of the investment made so far. NVIDIA's revenue for the 2022 calendar year was $26 billion. Its revenue in the most recent quarter was also $26 billion. So, essentially, NVIDIA now generates in a single quarter what it generated in an entire year two years ago. And if you compare NVIDIA's revenue with total cloud capital expenditure, nearly 50% of that spending is going to NVIDIA chips.

So the investment in artificial intelligence is huge. If you think about return on investment, the starting point is the "I", the investment itself, and that "I" is very, very high.
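To make the scale of that "I" concrete, here is a rough back-of-envelope sketch in Python. Apart from the roughly $1 trillion capital-expenditure estimate and NVIDIA's $26 billion annual (2022) and quarterly revenue figures cited above, the numbers, such as the 10% required return, are hypothetical placeholders for illustration only.

```python
# Back-of-envelope sketch of the ROI framing above. Only the ~$1 trillion capex
# figure and NVIDIA's ~$26 billion of annual (2022) and quarterly revenue come
# from the article; the 10% required return is a hypothetical placeholder.

ai_capex = 1.0e12                 # ~$1 trillion of AI-related capital expenditure
nvidia_2022_revenue = 26e9        # NVIDIA revenue for calendar 2022
nvidia_quarterly_revenue = 26e9   # NVIDIA revenue in the most recent quarter

# NVIDIA now books in one quarter roughly what it booked in all of 2022,
# i.e. an annualized run rate of about 4x its 2022 revenue.
run_rate_multiple = (4 * nvidia_quarterly_revenue) / nvidia_2022_revenue
print(f"Annualized run rate vs. 2022 revenue: ~{run_rate_multiple:.0f}x")

# Hypothetical hurdle: incremental annual return needed to earn 10% on the capex.
required_return = 0.10
print(f"Incremental annual return needed at 10%: ${ai_capex * required_return / 1e9:.0f}B")
```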

If you are a bull, the most important point here, one Brook mentioned, is that there is now a race to build the best foundation models (general-purpose models applicable to many uses), and that race will not slow down anytime soon. Looked at over the next one or two years, the return on investment may not look very good. But if building the best technology stack today gives you a 20-year stream of returns, then you can certainly justify the investment.

What's more, NVIDIA believes that over the next ten years it can improve the efficiency of AI processing by a factor of a million, on the same chip infrastructure. And when you talk to them, you understand that the infrastructure being built for training today is the same infrastructure we will use for inference. So as the world shifts from training to inference, that capacity is interchangeable; you do not have to build entirely new infrastructure for inference.

On the other hand, we look around and say, okay, there are some cool applications, but there is no killer application yet that would immediately soak up a large amount of this capacity.

So we understand why there is so much investment in artificial intelligence. There may well be a pause in the short term, and that will determine the near-term direction of the market. But from any medium- to long-term perspective, we are confident that artificial intelligence remains one of the biggest trends we have ever seen. It really just depends on your time frame, and we are acutely aware of both sides of the argument.

So where does the market go from here, and where will the focus shift?

Brook Dane: Building on Cao Cheng's point, it is very important to remember that no technology cycle rises in a straight line. They just don't. You get these waves of investment digestion and of hype meeting reality, and they play out over a multi-year horizon.

Our current view is that all this infrastructure is being deployed to run these models, and we are seeing incredible improvements in their performance and capability. However, as Cao Cheng mentioned, over the next year to eighteen months we really need to see applications that use this technology in more profound ways than coding assistance and customer-service chatbots.

If this ultimately only does coding and customer service, we have wildly overspent in this area. At the same time, both of us are very confident that in the medium term we will see these applications and use cases emerge, and that they will profoundly change the way we all work. But I think the whole market is trying to figure out what it will take to develop new applications and use cases, and what will show up. So we are in a period where we need to see progress on that front.

It sounds like you're saying this is a winner-takes-all market. Is this similar to the development of the internet, as we've seen with search or email platforms, where there will be a dominant player?

Brook Dane: This is another big topic.

Cao Cheng: We know there won't be more than four. No one else can compete. The only companies capable of investing at this level are Meta, Google, OpenAI, and Anthropic.

Brook Dane: But what we don't know yet is this: as these models mature and you stop seeing these step-function improvements, which will happen at some point, will the best model three years from now be so much better than everyone else's that it takes 80% of the market? Or will we have four very good models operating at scale that people use for different use cases, in different domains, in different ways?

If there are four equally capable models, you would expect them to be commoditized quickly. But if one of them becomes the clear, dominant leader, it will enjoy incredible economics. We don't know the answer to this yet.

But what we do know is that none of the four can afford to fall behind on the pace of innovation. Because if you stall at an undergraduate level of intelligence while the others reach doctoral-level intelligence, and they are that much better, you may struggle to find a market, and they will move down the efficiency and cost curve faster than you.

Cao Cheng: I think what will happen over time is that specialists in vertical fields will emerge. So beyond the race for raw speed and intelligence, the question becomes: how do we build more effective models for specific sub-industries and use cases?

Your colleagues in Goldman Sachs Research say that the benefits of artificial intelligence are accruing to large technology companies, and I have heard the same from you today. Do you see any challenges to that view?

Cao Cheng: From an infrastructure perspective, the race is essentially over. But when it comes to building vertical, industry-specific LLMs and models, as well as many edge use cases, I don't think the outcome is settled, and that is where I think a lot of the innovation is going to emerge.

Brook Dane: I would add that I don't believe this market will be won only by a handful of mega-cap stocks. Beyond model training, the most important question is essentially: what unique data do you have, and how can you leverage that data to help your clients?

Therefore, at the software layer, we are really looking for companies with deep proprietary data that they can use to create differentiated use cases and experiences.

But from the perspective of investment opportunities, this is largely a public-market phenomenon. We do not expect a wave of private companies to emerge and disrupt the structure of these industries.

So, to sum up: what is your biggest takeaway from this research?

Cao Cheng: My main takeaway is mostly about semiconductors. NVIDIA has obviously dominated the AI chip field for most of the past two years, but real alternatives are now starting to come to market. There is a genuine debate about whether NVIDIA keeps essentially all of that share or starts ceding some of it to others. We are beginning to believe that over the next few years there will be beneficiaries beyond NVIDIA.

Brook Dane: My main takeaway is that we are at the early stages of a very profound, far-reaching technological transformation, and I am confident that the state of these models, and the way they are developing, will drive the structural changes we have been thinking about and hoping for.

Our confidence in this technology cycle keeps growing. As they say, it's going to be huge.
