Morningstar analysts discuss AI’s impact on Big Tech—and on their fair value estimates and economic moat ratings.
Susan Dziubinski: Good morning. My name is Susan Dziubinski, and I’m an investment specialist with Morningstar.com. I’m joined today by two senior equity analysts from Morningstar Research Services to talk about the short- and long-term impact of AI on Big Tech.
Now, AI isn’t new. After all, we’ve been asking Siri questions on our phones since 2011. But AI is top of mind for today’s investor. For proof, just look at Nvidia’s stunning stock price gains this year.
Now, some of the ground we’ll cover this morning includes the investments that Big Tech companies are making in AI and the potential impact that this will have on their firms, how tech firms are integrating AI into their products and what that may mean for revenue, and why it may be premature to view AI as the next big growth opportunity for the largest tech companies.
I’d like to introduce you to two of my colleagues who analyze tech companies and stocks for a living here at Morningstar. Ali Mogharabi is a senior equity analyst with Morningstar Research Services who focuses on internet and software companies. Dan Romanoff is also a senior equity analyst with Morningstar Research Services, and Dan’s focus is software. So, let’s get started.
Good morning, Dan and Ali. Great to see you this morning.
Ali Mogharabi: Good morning.
Dan Romanoff: Good morning.
Dziubinski: Now, why has AI become such a hot topic during the past several months? Given that it’s really not new, what’s driving this interest?
Romanoff: I would say that late last year, OpenAI introduced ChatGPT to the world in sort of a general testing release. So, everyone got to play with it starting in October or November. And then, starting this year in January, Microsoft made a substantial investment into OpenAI that grabbed a lot of headlines. And then, a few months later in March, Microsoft introduced the new ChatGPT-powered Bing search with the new Edge browser. And that was a pretty big event, and that garnered a lot of headlines. So, there’s this building momentum based on that. That’s kind of the way I see it.
Mogharabi: I agree. I mean, of course, all of these companies that Dan mentioned had been investing in AI for a few years. Google is an example. They acquired DeepMind in 2014. They created the neural network architecture called Transformer, which a lot of the LLMs are based on, including GPT. And Google has created many of its own LLMs. But at the same time, as Dan was saying, you probably noticed that Google was not necessarily involved in the latest attention-grabbing headlines. It’s been very slow in trying to commercialize and/or make AI widely available for consumers and enterprises. So, there is some catching up to do for Google.
Dziubinski: Got it. So, you two cover two of the companies that are in the news and that people are talking about when it comes to AI, and that’s Microsoft and Alphabet, which is, of course, the parent company of Google. So, let’s talk a little bit about where each company stands when it comes to AI, and let’s start with Alphabet, Ali.
Mogharabi: Sure. On the top line, it could affect the search side of the business. The network effect moat source will be fine, and I think it will remain the global leader on the search side. The question, though, is how it can actually monetize Bard, which is more generative AI-based than its current search offering, and whether growth on that front would cannibalize current search revenue. There are different ways Bard can be monetized; subscription and advertising are a couple of those. And it could still drive traffic to Google Search. So, overall, I don’t think the impact will be much, assuming Bard is going to continue to improve. It is now based on Google’s most powerful LLM, PaLM 2, and could later be based on a combination of PaLM 2 and a more powerful and efficient one that these guys are working on.
I think Google’s GCP, or cloud business, is going to benefit more because more companies, whether enterprises or SMBs, small and medium-size businesses, are actually going to implement AI in their operations and/or products. That means they need more computing power, more data storage and security, more AI technology, which means demand for cloud providers like Google is actually going to increase. On the bottom line, there is likely a short-term impact, I think, on Search margin, given the cost associated with the training of the search engine with new LLMs. And the same is probably needed when it comes to advertising models, or ad models. But over time, as it scales, I think margins are probably going to expand again.
Dziubinski: OK. So, just to clarify, so then what impact at this point overall would you expect AI to have on Alphabet’s revenue and costs both shorter term and longer term?
Mogharabi: As I mentioned, not much of an impact on the top line on the revenue in the short term. In the long term, I think more of an impact on the revenue in terms of accelerating revenue growth on the cloud side, the GCP side. And then, on the bottom line or margins, short-term pressure on margins a little bit on the Search side of the business, but in the long run, that will recover again as it scales overall.
Dziubinski: Then, given Alphabet’s moves in AI so far, has that had any impact yet on your fair value estimate or economic moat rating for the stock? And would you expect that to change anytime soon? I think the fair value estimate is $154 right now, making the wide-moat stock about 20% undervalued.
Mogharabi: Yeah. I mean, even though the stock has gone up around 40% year-to-date, we think there’s more upside, and as you said, we still have that $154 fair value estimate. So, nothing has changed. But I must say, we continue to track the trend and growth of AI and its impact on the companies that I cover. But as of now, again, our fair value estimate is still $154, and the stock still remains attractive.
Dziubinski: Got it. Dan, let’s move over to Microsoft. You recently did a really in-depth report on Microsoft and AI in general. So, where does Microsoft stand when it comes to AI? In this report, you called Microsoft an “early leader.” Is Microsoft really the leader in your mind?
Romanoff: I would be reluctant to say that. I think you have Ali on here, and realistically, Google has been doing AI for years; they made the acquisition of DeepMind, so they’ve been here for years already. Microsoft has also been here for years already with AI and machine-learning services on Azure, and only recently, and this goes back to your first question, do you see this stuff coming up in a meaningful way in terms of investor interest and press interest, and that is largely based on that ChatGPT general release at the end of last year. So, that’s what generates the excitement. But realistically, Microsoft is a leader, and I think that language is correct. I think there are other companies who are making pretty amazing technology advancements here as well. Amazon, for example, is in this area. They have their own semiconductors. They have machine-learning and artificial intelligence services on AWS. They have their own large language model already in production. So, they’ve done a lot, too. So, Microsoft is a leader for sure. And I think Ali could appreciate this: to power these models, you need intensive compute resources, and the only way you really have that available to you is through some sort of hyperscaler. So, whether that is Microsoft Azure, Amazon AWS, or Google GCP, I think you need that sort of resource at your disposal. So, Microsoft is a leader and not the leader, and I think that is a fair description.
Dziubinski: So, then, same question to you. What do you expect AI’s impact to be on Microsoft’s revenue and costs both over the shorter term and then longer term?
Romanoff: I agree with what Ali was saying. I think in the near term there’s not going to be much of an impact on either the top or bottom line. Realistically, in my recently published paper, I was analyzing a bunch of different products, and there’s no pricing available really on any of these things. So, it’s like guessing how much Microsoft might charge for something, guessing what the penetration might be, what the uptake might be. So, there’s a lot of educated guessing going on there, and the point of that was really to put guardrails around investor expectations. So, I didn’t change any of my estimates. I can see at least 50 basis points of revenue benefit over a period of years on an annual basis. That seems realistic to me. It’s helpful. Is it material? Maybe. From a fair value perspective, we haven’t changed anything yet. There are just too many unknowns. There’s really only one product where GPT is available, and that is Microsoft’s GitHub Copilot, which is basically a $20 upcharge on GitHub. So, you’re talking about hundreds of thousands of users potentially paying an extra $20 a month. So, we’ll see what the uptake on that is. It’s been rapid so far, but I don’t know if the penetration will be 100% or 20%.
Dziubinski: Got it. Then, just to reiterate, there hasn’t really been much of an impact yet from AI on your fair value estimate or moat rating. I think the fair value is $325 for Microsoft, wide moat, and it’s about fairly valued today.
Romanoff: Yeah, I think all that is true. In the paper, I said that if you sum up all of these estimated product price points, you probably can get to around a 5% to 10% uplift to the fair value estimate. I didn’t bake any of that into our model just because basically all of this is still on the come. The pricing that I laid out for some of these products is a best guess based on available evidence. So, there are just too many unknowns, really, to factor anything substantial into a model.
Dziubinski: Got it. So, let’s talk about a few other names on your respective coverage lists and their AI prospects. Dan, you mentioned Amazon. Let’s start there. Let’s talk about Amazon AI and what’s going on there.
Romanoff: As I indicated earlier, Amazon has been in this area for quite some time. Just like Microsoft, they have AWS artificial intelligence and machine-learning services. So, you can train your models. You can run your models. They’ve even gone so far, as Google has, to develop their own chips that can train these large language models. They’ve made their own chips that can run the inference back in the data center. So, when I use my Alexa speaker in my bedroom to play some music and say the command, all of the processing happens on Inferentia chips back in an AWS data center. So, they’ve been at this for a long time also. So, like I said, I think they’re a leader, and a lot of that is just based on the scale they’ve developed so far.
Dziubinski: And then, what about Adobe? That’s also on your coverage list, right Dan?
Romanoff: I think you can extend this out to any of the larger software companies for sure. So, Adobe is one of them. Adobe has developed its own generative AI model. It is called Firefly. It is a very powerful tool for image generation. So, you can use natural language prompts, and it will generate images based on what you described. OpenAI has a pretty similar model called DALL·E, also very powerful. I’ve played with it. It’s actually a lot of fun. The Firefly model from Adobe, though, is amazing because it’s just baked into Photoshop. So, you can sit there and describe the picture you want, and Firefly will generate it instantly. And then, you could say, “Well, add more red to it” or “put a sunset” or “add some palm trees” or “put a person with a blue coat on.” And all this happens in real time, on the fly. And then, you could say, “Well, move the person from the left side to the right side.” So, it’s very powerful. It’s amazing technology. They’ve demoed it. I won’t say it’s going to be a game changer for Adobe, because, really, it’s just a natural evolution of their lead in that creative space, but it’s an important tool. I don’t think they’re going to be the only ones out there. Like I said, I think OpenAI is already doing something similar, but that becomes an instance where you basically have to lease out OpenAI’s DALL·E to use it. And with Firefly, it’s just built into Adobe. And since the creative world is already using Photoshop and Illustrator and Creative Cloud to do their content creation, it’s just pretty much a natural evolution and fits very nicely into the portfolio.
Dziubinski: Ali, let’s turn to one of the big companies on your coverage list, and that’s Meta. So, what’s Meta doing with AI that’s notable? And really, will that move the needle for the business?
Mogharabi: Meta has created its own LLMs. And one that has actually received a pretty positive reaction from developers is LLaMA, which many have said is easier to use and to customize, mainly because it’s more open source, and that’s actually because the entire model was leaked online earlier this year. So, developers don’t have to request APIs from the company as they do from the likes of OpenAI and Google. Now, internally, Meta is actually using AI to improve the effectiveness, or ROIs, of the ads on Facebook and Instagram that it sells to advertisers. There also have been reports that it may introduce an AI-powered chatbot in Instagram, where users can actually interact with it by asking questions, maybe asking it to write messages, and so forth.
There’s also another company in my coverage—Snap did something like that recently, and it calls it My AI. And of course, AI plays a big role when it comes to Meta’s long-term metaverse strategy. So, it will continue to invest in AI for various reasons that I just mentioned.
Dziubinski: Let’s take a step back then and talk a little bit about how you, or Morningstar in general, expect tech companies, both those on your two coverage lists and those that are not, to proceed when it comes to incorporating AI into their businesses.
Romanoff: From the software side of things, there’s obviously a lot of hype, and with pretty much every one of my companies, when I talk to them, that kind of takes up the lion’s share of the time. If you listen to the earnings calls, literally every company I cover spends a solid five minutes during the script talking about what they are doing with generative AI. So, it’s top of mind for CEOs and top of mind for companies. But the smaller ones are basically using OpenAI or one of its peers to incorporate generative AI into whatever they’re offering. An obvious use case, for example, is Zoom in customer service or Five9 in the contact center. Those are easy use cases, in my opinion, low-hanging fruit, and you’re seeing smaller companies just use somebody else’s model through the API that Ali was referencing. So, that’s pretty common.
The larger ones, whether that’s Adobe or Salesforce or Amazon, are much more heavily involved. There’s compute, there’s data centers, maybe they’re developing their own chips, maybe they’ve developed their own large language models. There’s a lot going on there, too. And I think if you’re able to do the latter, that is, develop your own models, that is going to be a point of differentiation for you. And it is expensive to do that, so I think it’s going to be limited to the larger companies that have that advantage. But realistically, everyone is going to be able to access somebody’s model through an API. There’s not going to be a ton of differentiation, in my opinion, between whatever Zoom is going to offer and whatever Five9 offers, or even Microsoft Teams and the AI that comes in there versus what Zoom is doing. I think it will be generally similar.
Mogharabi: Yeah. I mean, overall, I agree with Dan. Regarding some of the companies in my coverage, as I mentioned, Google, Meta, and also Snap have already taken steps on the AI front. And it looks like the ad holding companies are now investing more in AI. For example, WPP, the latest one, announced a partnership with Nvidia to create an AI-based platform that’s going to make the creation, or production, of creative content more efficient. And by the way, they included tools provided by Adobe within that AI-based platform. So, various companies, if not nearly all the companies that I cover, are going to be involved in terms of developing and/or implementing AI technology.
Dziubinski: Given that AI seems to be touching almost every tech company, gaze into your crystal balls: What impact do you think AI might play in Morningstar’s economic moat ratings for tech companies over the long term?
Mogharabi: I’ll start really quickly. For me, when it comes to Google and Meta, I think their network effect moat source is going to remain intact. As expected, on the data side, or intangible assets, I think those will strengthen. But of course, that could be affected by lawmakers here and in other regions, given that data privacy and security issues are still prioritized. But that’s pretty much a summary of what I see in terms of how AI may impact the economic moat sources.
Romanoff: Yeah, and I agree with all of that. And I think the privacy issue is important because a lot of these companies are saying, “Well, we can take the OpenAI model and then we can fine-tune it, which is just an extension of the training process. We can fine-tune it using our data to make it a more relevant experience for our users.” So, if there’s data involved and there’s IP involved, yes, I think that perhaps strengthens the moat a little bit. From a software company’s perspective, I think ultimately it’s a feature that is baked into whatever application you’re running, whether it’s Microsoft Office or Salesforce CRM or whatever. It makes it a more usable application, and maybe it lets you do things in a more efficient manner. So, perhaps it strengthens the switching costs, but in reality, as I indicated, I think that before long, AI is going to be pervasive throughout everybody’s applications. So, there’s not going to be a huge differentiator.
Dziubinski: Now you touched on the privacy issue. Let’s talk a little bit about some of the risks to tech companies, again, both those you cover and those that might be outside of your coverage list, when it comes to AI, given that AI is really still in many ways in its infancy.
Mogharabi: I can begin, I guess. As I mentioned earlier, the biggest risk, I think, is on the regulatory front and how much patience lawmakers have for the adoption of AI. It’s going to raise questions about its impact pretty much everywhere: on the economy, on copyright issues, especially when you’re talking about generative AI and search, and also where some systems may actually be trained on bad data intentionally. These are a lot of the things that lawmakers here and elsewhere are going to be focusing on. Again, in my opinion, the biggest risk is on the regulatory front.
Romanoff: Yeah, I think that is important, because right now, there’s sort of a scattershot approach happening. In the U.S., we’re a little bit slow to react to these things; I don’t think we’re doing much. In Italy, for example, they’ve temporarily banned it. You can’t use OpenAI for six months in Italy. And there’s something going on everywhere in between, whether it’s an outright ban consideration or just a slowing of the development of this stuff. You also see groups of CEOs, Elon Musk famously among them, getting together and saying, “We’ve got to slow this down; we can’t be introducing it so fast.” I think he has ulterior motives, but whatever. So, I think that’s a risk. There’s definitely a question I get from my clients: if you have software companies with seat-based models and all of a sudden you’re going to introduce generative AI into them, and you can use an AI instance as a customer service rep, for example, a virtual agent, doesn’t that reduce the need for an actual software seat for the physical rep? Some of those questions, I think, are still up in the air. But realistically, I don’t believe a software company is going to introduce a product that cannibalizes its own sales in that regard. So, I think that is a lesser risk. Those are some of the things that I think about.
But one thing Ali mentioned is the rights and the copyright stuff. I’ll just point out that with Adobe’s Firefly model, they’ve already worked that out. They have a large database of stock images, and those are all incorporated into the model training, so the images it creates are not based on others’ images. They’ve already worked out the rights management there. So, that is an important thing for an enterprise to consider when moving to a creative-imagery type of AI model.
Dziubinski: Ali and Dan, are there any particular companies right now that you would suggest investors who’d like to play that AI theme consider investing in today?
Mogharabi: In terms of the companies under my coverage, I think Google or Alphabet stands out. It’s continued to invest in AI for years, has various LLMs that it’s been working on for years, and certainly has a lot of capital on hand to increase and/or accelerate its investments. It is prioritizing AI and incorporating it into search, its ad technology, cloud, including Workspace and its other products. And of course, it remains undervalued, in my opinion, with a fair value estimate of $154. So, again, to answer your question, I think among the companies under my coverage, Alphabet still remains the most attractive one.
Romanoff: And from my perspective, the two most obvious investment plays are either Microsoft or Amazon. And I think that is unfortunate for investors, because I get this question a lot: How do we invest in AI? And if you buy Microsoft, or, with all respect to Ali, if you buy Google, you’re not really making an AI investment, right? There’s a lot of stuff in there, and AI is a small part of it. So, it’s not necessarily a 100% clean exposure to AI. But Microsoft, through its OpenAI partnership and that investment, is basically infusing the entire portfolio already, whether that’s Bing search or whether that’s gaming, and they’ve already introduced it into Office 365 through a closed beta right now. So, it’s going to be everywhere in the portfolio, and I think there’s going to be an upcharge for it. And also, OpenAI runs exclusively on Azure. So, even if OpenAI is leasing its API to whatever company, all of that inference is going to happen back in an Azure data center. So, Microsoft clearly is going to benefit from all of this. And Amazon is going to benefit similarly in that they’ve got AWS and machine learning. They’ve developed their own chips. They’ve developed their own large language models. Their product search has been powered by AI for years, so this is just an evolution. But I think those are the two most obvious ways to play AI in my coverage.
Dziubinski: Then it sounds like from your perspective that we’re still in the very early innings of this AI arms race. So, given that, what one piece of advice would you give investors who may want to scratch that itch of trying to get in on the AI investing theme now?
Romanoff: Maybe Ali should start with this one.
Dziubinski: Too early? Or you can’t effectively do it or …?
Mogharabi: No. Again, I think some of the larger companies that have continuously invested in AI over many years remain leaders on that front and also actually provide the entire computation capability, all of that processing. And as Dan mentioned a couple of times, they have actually designed and created their own chips, which enhance that computational capability and make it more efficient. I think those companies stand out. And among the companies on my coverage list, the one that stands out remains Alphabet.
Romanoff: Yeah. And just building on that, if you’ve built your own models and you’ve built your own chips and you have your own data centers, ultimately this AI movement, if nothing else, consolidates power among the hyperscalers. That includes Ali’s companies; we talked a lot about Google, so pretty obviously it’s going to include Google, and Microsoft as well. There’s only a handful of them. And so, you see these companies basically strengthening their positions right now, and they sort of de facto become the best investments. So, if you’re looking for a pure play, I don’t really think there is much you can do right now. If you just want to participate in the AI theme, then I think it’s pretty obvious that you buy one of these really strong, wide-moat companies that already have a strong position and have been in AI for years, like Microsoft, like Amazon, like Google. So, I think that is what I would say to investors: don’t necessarily look for some home-run stock.
Dziubinski: Let’s move on to some audience questions. We have a couple coming in. So, please keep submitting them. First question. What does Morningstar think of Nvidia today? Is Nvidia the best play for AI investors today? I know neither of you covers Nvidia, but can either of you comment?
Romanoff: I mean, I have some thoughts on it, so I’ll offer them up here. Maybe Ali has some different thoughts. Just as a company, we recently raised our fair value estimate pretty significantly, I think, from $200 per share to $300 per share. I’ve been covering tech for a long time besides software, and I would say that is one of the strongest quarters for a semiconductor company that I can remember in 25 years of doing this. They’ve repurposed their GPUs for AI learning, and there are a lot of benefits to using a GPU to do that. So, they are a leader, and fortunately for them, there’s really only one other company that makes discrete GPUs, and that is AMD, through their acquisition of ATI years ago. So, they’re well positioned. And I’ll speak briefly for Ali here. I know they’re developing their own chips over at Google. I know Amazon is, too. So, it’s not like Nvidia is the only option. I know AWS, for example, uses three different internally developed chips: Graviton, Trainium, and Inferentia. Two of those are specifically for AI training and AI inference, which is basically just using an AI model. And if you’re the user, you get to pick. When you say, I want to be an AWS customer, you get to pick whether you want to use Nvidia chips or Trainium chips to do it. So, Nvidia is a great company. They’re well positioned. They have a good partnership with Microsoft. But I don’t really think that they’re the only competitor. You’re just not going to find a competitor, I don’t think, that is a separate stand-alone semiconductor company. That’s kind of what I think is going on there. Maybe Ali has some more to add, though.
Mogharabi: I mean, I agree with Dan on that front. And you’re right: Google actually has created its own chip, and it did that starting, I think, in 2016. And it’s been powering its GCP offerings. Similar to how Dan described AWS’ approach, Google does provide options for its clients, whether to use the TPUs or to use Nvidia. But I do think that with these hyperscalers, and the fact that, again, as Dan mentioned, they have everything in-house, they have the data centers, they have the cloud infrastructure, they may, over time, utilize their own chips a little bit more. But that certainly does not mean that Nvidia is not going to remain a leader on this front.
Romanoff: And I would just say, I think it would shock people, because everyone thinks that Nvidia is the leader in this, and I think it would be shocking for listeners to know that when you start doing the benchmark tests, these in-house chips can significantly outperform Nvidia chips depending on the workload. The chip developed by Google I know less about, but I know the ones developed by Amazon do. So, if you’re the expert at AWS and you’re working with your customer to steer them in the right direction, some use cases will be way more efficient to do on an AWS chip versus the Nvidia chips. And also, they’re priced accordingly. It’s actually more expensive to use the AWS chips in some cases, depending on exactly what you’re trying to do. So, it’s hard to say that Nvidia is just way better than everyone else. They make an excellent set of chips, and I think if used appropriately, you’re going to get a lot of bang for your buck there, but it’s certainly not the only option.
Mogharabi: Yeah, Dan, that’s a very good point. The same thing can be said regarding the Google chips. With TPU4 and TPU5, there have been some benchmarks indicating that they have actually outperformed some of Nvidia’s chips. And as Dan mentioned, it certainly does depend on exactly the processes that need to be conducted. So, some outperform, some may not, and they may be priced accordingly.
Dziubinski: We have another viewer question, and I’m going to read this one directly. Google itself marked the introduction of OpenGPT with an internal memo called “We Have No Moat.” How should we reconcile their internal view with Morningstar’s maintenance of its wide economic moat rating?
Mogharabi: Actually, I read that. And “We Have No Moat” also referred to basically the overall commoditization of generative AI, so not necessarily just Google, but pretty much every other player on that front. What I would say, though, is that the reason we’ve maintained our wide moat rating is that network effect they have. And that network effect consists of having a huge, huge user base, whether it’s Search, whether it’s YouTube, and a lot of its other products. And I think those will remain, especially when you’re talking about ChatGPT or Bing’s chatbot, with Bard continuing to improve, and, of course, as I’m sure Dan can comment on, with some limitations that Microsoft has actually put on, for example, how much the Bing chatbot can be used. I think that network effect for Google will remain intact. And at the same time, as I mentioned before, they do have the cloud side of the business that can actually help on the computation, the data storage, and just the overall processing that’s necessary for all of this AI to work. So, you put all of that together, and I think the network effect and those intangible assets, the moat sources, will remain intact.
Dziubinski: Go ahead, Dan.
Romanoff: Sorry. I know that question was directed at Ali, but you’ve heard me say numerous times throughout some of my Q&A here that I don’t think there’s going to be a ton of differentiation, because a lot of smaller companies are just going to be using a model developed by someone else. So, the real differentiation becomes: if you are a hyperscale provider and you have the chips and you have the model and you have the data center and you’re doing all the inference processing, you’re basically the only one that can offer that package. So, I know Google said, “We have no moat.” I think they said OpenAI doesn’t have a moat either. And I tend to agree with that. But ultimately, AI is just a feature within an application. If you think about Microsoft Word, the ability to bold something in Microsoft Word isn’t really a competitive advantage, right? Because you can go use Google Docs, and you can bold a word there, and it doesn’t make it better or worse. So, I think you’re going to see the same thing with AI. It’s going to be pervasive in every application, and that’s not really the differentiator. All of the other things that gave Google or Microsoft a wide moat still exist. And the introduction of AI into the portfolios doesn’t really change the rest of the moat sources that have been built up over time.
Dziubinski: Final question from our viewers today. Which companies under coverage are at greatest risk from the AI revolution?
Romanoff: As I was saying earlier, it’s some of the smaller companies that I cover. One of the most obvious use cases to me is customer service. If you’re a small company, you offer a chatbot. We’ve all been interacting with chatbots for years, and I think, generally, they’re pretty terrible; it’s not a great experience, and it’s very frustrating. So, if you’re one of these companies and you do some sort of customer service, whether it’s a software piece or a contact center piece, I think there’s a little more heightened risk, because it’s going to become harder to differentiate, and it’s going to allow a competitor to do the same thing cheaper or more efficiently. And if you’re not using your own models, your differentiation becomes basically nothing. So, that’s going to make it a little harder for them. Within my coverage, Zoom is going to be guilty of this, even though they’re not necessarily a small company. That’s a pretty obvious example. RingCentral is a small company, and I think they’re most likely going to be guilty of this too, at least in the near term. So, there’s a couple. But I don’t think it’s an existential threat. I just think the risk is that they have limited ability to differentiate using AI.
Mogharabi: Dan, to a certain extent, I do agree with you. Regarding some of the companies that I cover, I think the initial perception out there is that the ad holding companies would be at risk because AI models could replace the creative talent they rely on to come up with ideas for effective campaigns and so forth. So, that could be a short- to medium-term risk, because that perception exists. But the way I look at it, AI models, like the one WPP and Nvidia are actually creating that I mentioned earlier, help make the production process of creative talent more efficient and less costly, but that doesn’t necessarily mean you no longer need the ad agency’s human minds to come up with the creative idea. So, that’s the way I’m looking at it. I think there is a misperception out there that with AI, the content creation and idea creation done by human beings is going to be replaced. I disagree with that. I think it just basically makes the production of that content more efficient.
Romanoff: Yeah. It’s funny you say that, because a couple of months ago, I think it was Coca-Cola that launched an ad, a TV spot, created entirely using Stable Diffusion, which is basically a competitor to OpenAI’s DALL·E and Adobe’s Firefly for image and video generation. And the ad was great. It was creative. It was amazing. So, I get that question about Adobe: isn’t this going to put them out of business? And the answer to that, I think, is quite obviously no. They’ve already developed Firefly, and you still need a creative human to help create that content, or to take the content that is generated and edit it in such a way that it meets the needs of the client or the product they’re trying to highlight. It just makes the job more efficient, is the way I’m thinking of it right now.
Mogharabi: Yeah, I agree with you, Dan. And actually, the campaign that you just mentioned, the idea was actually created by WPP, which is the main ad agency of Coca-Cola. And it was pretty entertaining.
Dziubinski: That’s a relief to hear. I’d like to thank you both, Dan and Ali, for your time this morning. We appreciate it. I’d like to thank the viewers for attending. And I’d also like to remind you that for the next two weeks, you can experience the best of this year’s Morningstar Investment Conference. Just hop back to the Cvent platform from the link in your email anytime, and you can explore a digital library of some of our favorite sessions from the Morningstar Investment Conference on demand and totally free. And if you like what you see, join us next year at the Morningstar Investment Conference on June 26 at Navy Pier in Chicago. Thanks a lot. Take care.

The author or authors do not own shares in any securities mentioned in this article. Find out about Morningstar’s editorial policies.
© Copyright 2023 Morningstar, Inc. All rights reserved. Dow Jones Industrial Average, S&P 500, Nasdaq, and Morningstar Index (Market Barometer) quotes are real-time.