
Here’s how the University of Utah will showcase the responsible use of AI


The University of Utah has long been at the center of revolutions in technology, from development of the internet to innovations in graphic computing. Now the U is poised to take the same leading role with artificial intelligence. In this episode of U Rising, host Chris Nelson talks to Professor Manish Parashar, who is overseeing the university's Responsible AI Initiative.

Subscribe to the U Rising podcast on your favorite streaming platform, including Apple Podcasts, Spotify and Google Podcasts. You can also access episodes of U Rising on our news website.


Chris Nelson: The University of Utah has long been at the center of revolutions in technology, from development of the internet to innovations in graphic computing. Now we're poised to take the same leading role with artificial intelligence.

My guest today is Dr. Manish Parashar, who is overseeing the U’s Responsible AI Initiative — AI as in artificial intelligence.

Professor Parashar is going to share the big ideas behind the initiative and how the U plans to model best practices and deep deployment of this rapidly evolving technology. Welcome to U Rising, Manish.

Manish Parashar: Thank you, Chris. Delighted to be on this podcast.

Chris Nelson: So in a recent commentary in the Tribune, you wrote that throughout your career as a computer scientist you've always been drawn to solving challenges. Was it that challenge, the responsible use of AI, that led you to the University of Utah?

Manish Parashar: Yes. Throughout my career, I've always focused on how I can leverage my expertise in computing and in data to address challenges that are important to science and society. And two big aspects of that are multidisciplinary research, working with experts in different disciplines and bringing them together to address these issues, and translation, addressing the problems conceptually but then building solutions that can be used to actually have an impact on those problems.

And those were the two aspects that I saw when I looked at Utah as an opportunity for my career, right? This is integral to many of the examples you gave at the beginning, whether it's the internet or graphics, where people really came together across many disciplines and built solutions that have transformed their fields. And the opportunity to do that is what attracted me to Utah.


Now, responsible AI is one example where we are doing exactly that. We are taking AI, along with different areas where AI can have an impact, not just the technical aspects but also the social aspects, the policy aspects, the ethical aspects, and putting them all together in a multidisciplinary way to really impact challenges that are important to this region, such as mental health, environmental sustainability, water sustainability, air quality and the workforce. How do we have that impact? That's really what brought me to Utah.

Chris Nelson: Yeah. Well, let me touch on that, because I think it's one of the unique things. I mean, if you look nationally, there are a lot of people investing in AI, right? That's not unique. What is unique is that word “responsible,” and, like you said, also a local iteration of that. Is that where you think we'll make our name in the AI space?

Manish Parashar: Absolutely, right. I think there are two aspects that differentiate us from everything else that's going on. Yes, there are large amounts of investment in AI in industry, in academia and in government, and they're all advancing AI. What's unique here really, again, builds on what's in the DNA of this university.

It's taking technology as it exists today, applying it to solve important problems and, through that translation process, advancing and transforming the technology itself. We did that with networking. We did that with graphics. We did that with imaging. We can do that with AI. The other aspect is, again, what you mentioned: responsible. We don't want to layer on the different aspects of being responsible, such as fairness, privacy, ethics and transparency, after the fact. We want to build them in from the ground up, in the technology that we build, but also in the applications that we build and deploy. We want to make sure that we are thinking of this as an integral part of developing the solution. So that's really what's going to set us apart from everything else.

Chris Nelson: For those who don't follow the AI world, are we in the infancy of this technology? Or is it an adolescent? Where are we right now?

Manish Parashar: So, AI has existed for a long time, right? The term was coined in the mid-1950s, and the field has evolved since then. But what has really transformed it in the last decade or so is the availability of large amounts of compute and large amounts of digital data. The methods have existed and evolved for several decades now, right, but now we have the ability to apply those methods to a lot of data. Think about all the literature that has ever been published in human history, take all the data that we have on the internet and then apply these methods, these algorithms, to that data, because we have the computing power to do that.

That is what has transformed AI. Now you can do many interesting things that you were not able to do before, and that's what has made AI such a pervasive, such a dominant technology today.

Chris Nelson: You know, you and I were just at a meeting with leaders in this field and I was really intrigued with a question that came up, which is, is ‘artificial intelligence’ the right name for this? If you were to go back and rename it, would you call it something else? Or is ‘artificial intelligence’ the right way to think about it?

Manish Parashar: That's an excellent question. I think using the term artificial intelligence has certain implications that came up in that discussion, around the ability to reason, the ability to think critically about something, which the current state of the art doesn't have.

If you look at the current state of the art, it has the ability to see patterns in tremendous amounts of data and, using that, using simple statistical techniques, interpolate and possibly extrapolate. So it's being able to . . . if it has seen all the literature that's ever been written, with high probability it can guess what the next word you're about to type is and suggest that. And when it extrapolates, it often gets it very, very wrong.

But that's the state of the art. Critical thinking doesn't exist. Being able to find connections that are new and novel is not something it can do. Simple common sense is not something that AI can do today. So in that sense, maybe the term AI has connotations that are not quite accurate and leads to expectations or assumptions that are not correct. When someone thinks about it as an AI tool, they expect things to be correct and intelligent, which they are not really today, right? So, in that sense, maybe it is a misnomer.

Chris Nelson: Yeah. Well, with that come, you know, the obvious pitfalls. And again, you know, I'm a fan of movies, so everything from The Terminator to end-of-the-world scenarios. So, as a scientist and someone who's in this field, is there an assurance we need to provide listeners around the dark side of this? Everything from malicious applications . . . I mean, everything that can be used for good can also be used for bad. But again, there's so much potential here. I'm just curious how you answer that question and how your colleagues talk about that.

Manish Parashar: So, I'm very optimistic. I see tremendous potential in AI. It does things that humans don't do very well: assimilating tremendous amounts of data and finding patterns quickly, right? So in that sense, it can really help augment humans, providing tools that can make them significantly more effective in whatever they do. So, I think it's a tremendous technology.

I recently heard an analogy I thought was great: it compares AI to fire. Fire can be a really amazing tool, but it can have really disastrous effects if we don't use it carefully. And you can think of AI in the same way. It has tremendous potential if we use it correctly, but if we don't, it can also do a tremendous amount of damage. It can do things at scales, at speeds, that we cannot imagine. And so we have to be careful. And that's really what responsible AI is: being able to build AI with the technology, with the guardrails, with the understanding and awareness so that it can be used effectively.

Chris Nelson: On that front, you mentioned some of the responsible AI projects that we will use at Utah, literally to benefit our state. Can you give us a taste of the specifics of some of that? Are there other projects in the queue? This would give listeners a sense of how we're thinking about this.

Manish Parashar: Absolutely. And this is really evolving very, very quickly. The initiative itself is quite young. It was announced in October, and we are working hard to start deploying things. As I mentioned, it's really going to address three key areas that are important to this region.

For example, mental health. How do we bring together the different dimensions that can have implications for mental health, whether environmental or health-related, along with historical aspects, lifestyle, education and training, right? They all bring a lot of different considerations. Can we bring all these dimensions together to understand, for an individual, what the relevant factors are and how best to address them? Because the circumstances of each individual are quite unique, and often the data that medical professionals work with are quite siloed. What AI allows us to do is look at many different aspects at the same time, find patterns and correlations, and understand for an individual what the right approach is to address those challenges.

And you can think about the same thing for air quality. There are so many aspects that impact air quality: the current weather patterns, what's happening in our surroundings. For example, if there's a fire in California or somewhere on the West Coast, that's going to impact us. What are the pressure patterns? That will affect the local state of the air quality, the current level of pollution. So, if you combine all of those together, it'll help you understand what the impact is going to be, how fast that impact is coming, whether there's an inversion, and how we can anticipate it and prepare for it, for example, by creating shelters for individuals, giving warnings so people can go to them and designing the right types of filters.

So there are so many things we can do once we understand the many dimensions of a problem, and the same applies to the other issues that we are looking at, right? These are just some examples of bringing the power of AI together with the data sets, the expertise and the understanding we already have at the university. We have a unique ability to address these problems.

Chris Nelson: Something you said in there, I remember . . . I've worked at the university for a long time, and I know in the nineties and early 2000s we talked about supercomputing. Really, this is just along that continuum. It's now kind of applied supercomputing, basically. Is that a fair way to think about it?

Manish Parashar: Right. I mean, at its essence, AI is a lot of data, a lot of computing to process that data and the algorithms that go with it. And so computing is an integral part of it. Having access to sufficient computing is critical. So, one of the big investments we are going to make as part of the AI initiative is to help create or provide that computing infrastructure to researchers at the university and across the state, right, so they have the tools that they need to be able to compete in this AI landscape.

Chris Nelson: How are we approaching the ethical side of it? I know we've got folks from the basic sciences and the research areas. Do we have folks from humanities and law participating in this? What does that engagement look like?

Manish Parashar: Absolutely, right. So right now we have created campus-wide working groups that are trying to take this high-level framing and interpret it into specific research challenges, and to form teams that can come together to address those challenges, building on the strengths that we have at the U and aligned with challenges that we see in this region. And so that's going on.

They include individuals from the health sciences campus, from main campus, from the humanities and social and behavioral sciences, from education, from law and policy, from social work, right? It's a truly university-wide group coming together to address these challenges. And I think that's the kind of team you need to be able to address this in a truly responsible way: to build technologies that are responsible, to build policy frameworks that can provide guardrails to make sure it's responsible. But I think most important is to create awareness in the community of what the implications are, so that people understand both the strengths and the limitations of these solutions.

Chris Nelson: Well, and you're the right person at the right time. I know you've held roles at the national level and you've helped shape broad vision and models for AI application. The university's part of, if I have this right, the National Artificial Intelligence Research Resource Task Force. That's quite an acronym. Can you talk about that work a little bit?

Manish Parashar: Absolutely, right. So, in about 2021, as part of this broad activity around the National AI Initiative Act, there was a recognition that the ability to contribute to the AI research and development ecosystem depends on having access to computing, to data, to the infrastructure needed. And, understanding how transformative AI is as a technology to science and to society, there was a concern that only the largest and most well-resourced institutions would be able to contribute, because they have access to these capabilities, to the infrastructure. So Congress was thinking about how you democratize access to the resources so that everybody can contribute. And there are benefits to doing that, right? It improves the quality of the research, brings new approaches to AI, ensures that the models are more representative, more fair. So, there are many advantages to broadening this ecosystem. It just increases overall research competitiveness if more people can contribute to it.

So Congress mandated that a task force be set up, led by the National Science Foundation and the Office of Science and Technology Policy. It specified the composition, which included representation from government, academia and industry. And its job was to create a vision and an implementation plan for a national resource that could democratize AI R&D.

I had the privilege of co-chairing that task force. We worked for 18 months and released a report in January of 2023 that went to Congress and the president and said, okay, this is what such a resource looks like, this is how you can build it, this is what it's going to cost, and here's legislation that you can use to appropriate funds to build this resource. We priced it at $2.6 billion, and it included partnership with industry, with government and with academia to build this resource and stand it up.

It's still being considered in Congress today, but the Executive Order that came out last fall mandated that NSF stand up a pilot, which is now operational. So, there's a pilot effort demonstrating both the feasibility and the value of such a resource. It has been stood up, and we are still waiting for the whole NAIRR, the National AI Research Resource, to be funded and stood up.

Chris Nelson: Yeah. It's great that we've had some say in that and been part of that ground-level work. I know there's an event coming up on May 8, a One University town hall, so that kind of brings it back to the local level. Do you want to talk about that a little bit for our listeners who are on campus? Why would they want to participate in that? What will they get out of it? What are the details around that event?

Manish Parashar: So, the goal of that event is to communicate what this initiative is. I think it's a unique opportunity, a very visionary opportunity, that the president has provided to us to have a huge impact. So we want to communicate what it is, but also communicate how everybody can participate in it, right? It is truly a One U initiative, and I think to be successful, we need everybody to be part of it, to contribute to it and to see how they can benefit from it. So I'm hoping to have that conversation, to communicate the different dimensions of this initiative, but also open it up so that people can ask questions and find out more about it and how to participate in it.

Chris Nelson: And listeners can find that information via the . . .

Manish Parashar: We do have a webpage for the initiative. It's r-a-i for Responsible AI, so . . .

Chris Nelson: Excellent. And that is coming up on May 8 from 9 to 10. And just an editor's note: when we talk about One U, if you're not on our campus, we mean one university, and that's our attempt to do the difficult work of breaking down silos and working across disciplines and across colleges. I know, Manish, when I talk to people outside of the organization, they're like, well, yeah, we never assumed you weren't one university, but internally, there's always work to be done.

Well, that will be awesome. Hey, my last question: the reality is AI is already permeating our lives through the technology we use, whether it's a student using ChatGPT to help with writing a paper or something else. With all that in mind, and with all the conversation around ethics, are you optimistic? Do you have a hopeful message about the potential of AI? I know the answer to this question, but I'm going to give it to you as a softball.

Manish Parashar: Absolutely. I'm extremely optimistic about the transformative potential of AI. I think it's a tremendous technology that can have a tremendous impact on all aspects of our lives: our ability to innovate, scientific discovery, economic development. It can have a tremendous positive impact. But I think we need to do this in a responsible way, which means we need to be conscious of its negative aspects and impacts, build those guardrails in, and also build awareness so that people understand both the positive aspects and the concerns, so that we can use it and have the positive impacts that we want to have.

Chris Nelson: Dr. Manish Parashar oversees the University of Utah's Responsible AI Initiative. Manish, thanks for being our guest.

Manish Parashar: Thank you very much for this opportunity. I really enjoyed the conversation.

Chris Nelson: Listeners, that's it for today's episode of U Rising. Our executive producer is Brooke Adams and our technical producer is Robert Nelson.

As the semester comes to a close, I want to let you know that U Rising will take a short hiatus and return in a few weeks. On behalf of co-host Dr. Julie Kiefer and myself, I want to thank you for listening to our conversations about the great people, programs and initiatives happening at the U. I also want to thank our founding Executive Producer Brooke Adams. This is her last podcast with us. She's moving on to wonderful new things and this podcast would not have happened without Brooke.

I'm Chris Nelson. Thanks for listening.