AI is changing communications jobs

Moody College professor talks corporate communications and the impacts of artificial intelligence
Keri Stephens


Keri Stephens, a communication studies professor at Moody College of Communication, has been studying the use of new technologies in corporations, nonprofits and governments for much of her career. She’s focused, particularly, on mobile technology, but as today’s large language models such as OpenAI’s ChatGPT roll out to the general public, she’s been looking closely at their implications — both good and bad — for employees and students. 

We asked Stephens to share her thoughts on AI, why students should consider careers in corporate communications and how new technologies will transform the field. 

Hear what she had to say.

What exactly is corporate communications?

It’s different in different companies. A lot of times people are designing the communication inside of a company. It could be related to training and development. It could be helping put bios together for the executives or handling public relations, social media or putting together press releases. Sometimes you’ll even have people who are responsible for graphics and writing for marketing. It really varies. Large companies are going to have it more tightly segmented into different groups, so you might have a PR group, a marketing group or a very well-defined corporate communications group. In a lot of smaller companies, a person in a corporate communication role wears a lot of different hats. 

Why is it important that people consider corporate communications as a potential career path? Why is it attractive? 

I think one of the main reasons it’s attractive is that it’s a great way to get your foot in the door and really understand the voice of the company. You get a good handle on what’s happening inside the company and how the company projects itself to the outside world. People who have trained to be good writers, good speakers, good thinkers, they’re really in a great position to work hand-in-hand with corporate executives. 

What got you interested in corporate communications? 

I’ve been teaching in the corporate communications track here at Moody for 16 years now. My field that I trained in is called organizational communication. Organizational communication is fairly similar to corporate communication, but we’re looking more broadly at different types of organizations that our students go to work for — nonprofits, for-profits, government agencies. We’re looking at the role communication plays, both internally and externally. 

The area I’ve been focused on for my whole career has been the role technology plays in organizational communication. I have historically studied mobile communication — the ways we use cellphones at work and the way that we are mobile in our jobs and how that affects communication patterns. 

How is AI influencing the future of corporate communications both positively and negatively? 

Boy, I’ll tell you. If I had a magic eight ball that could tell me the future, I would be putting money in stocks trying to bet the right ways. The things the average person wasn’t talking a lot about until really last November, when OpenAI launched ChatGPT, are really transformational. It’s been less than a year, and everything went kind of wild. My daughter just graduated from college, and my son is a junior. When they came home from school, I sat them down and I said, “Teach me everything because I want to know what you know as a student.” They were showing me the tools that they were using at that time. I’ve always studied technology, so I could see the positive side of this, the scary side and some of the negative implications. 

What’s fantastic is that mundane tasks that we need to get done, if AI can automate them, that’s a really great thing. We would have a lot more time to do other kinds of tasks. The biggest disadvantage that I see right now is that, unless you’re an expert, you don’t know if what it’s saying is right. One of the things I’m doing right now is taking research articles that I’ve written, so I know what’s in them, and I’m testing them out. I am the expert on the article, and it helps me evaluate how well these different platforms are performing. But for the average person, and for me in other contexts where I’m not an expert, the real challenge there is when you get a result back, and it sounds reasonable, but you’re not sure if it’s accurate. There’s a concept that a lot of people have talked about in the press called “hallucinating.” What that means is, if AI is looking through and matching patterns and trying to find things, if it can’t find something, it might just make it up. Right now, we can’t tell if it’s made it up or if it’s really pulled together from good sources. The other challenge is that very few, if any, platforms right now can tell you where they got their source information from. When we get to a point when it can give me sources, and I can go back and check it, I’ll be super excited about that. 

I think the big fear that I have right now, and this might change in six months because it’s changing so fast, is that students might over-rely on it and not learn some very basic skills. It is really easy to let these large language models write a paper for you. But the problem is, if you don’t ever learn how to write, this could hurt you and your career in the long term. 

What are some things we need to be aware of with AI? 

One of the biggest things we need to be aware of is that the datasets that these large language models have been trained on are biased. There are a lot of really shocking examples out there where you might ask ChatGPT to tell you who becomes a medical doctor in the U.S., and ChatGPT might come back with “men” because the data they were trained on said it was predominantly a very male-dominated field for a very long time. The data is biased. The other big question I think everybody has right now is whether the data they were trained on is copyrighted. I am no attorney, and that will all be determined in the courts. I think it's going to happen sooner rather than later. It’ll be very fascinating to watch as we move forward, whether it is copyright infringement to go grab all this information and train your models on this data, rather than what is considered public. 

Can you talk about privacy and AI? 

Big tech has already been meeting with governments around the world to try to decide whether legal limits and restrictions are going to be placed on these tools. I have no idea how far that's going to go. I think it's going to vary between countries. We can look to the history of government regulations globally and make some predictions about which countries are probably going to protect people's privacy. I think a year from now, we're going to have a very different conversation because things will be regulated. The models will continue advancing. I also think we can't sit here and be afraid of it. We have to be working with it carefully, not putting anything confidential online. 

From a corporate perspective, I would say the number one issue they're concerned about is that they do not want their employees putting private corporate data into a large language model that could then be used to train other models and other people could find it. We're also going to see companies bring some of these large language models in behind their firewalls. That means that the IT departments are going to regulate it and make sure that what's generated behind the corporate firewall doesn't go back out to the internet. 

How are you seeing AI implemented at Moody College? 

I’ll give you a great example of where I think people are leveraging it in fantastic ways. My students came to class in my graduate class this semester, and I told them, “Play around with it. Make sure you’re not putting confidential information in it because you don’t know where that goes right now. But take something, put it in there, and see if it can help you enhance your own writing in some way.” I had a student come back to class and say, “Dr. Stephens, you told us to write a bio about ourselves, and I’m always so self-conscious when I write my bio because I feel like I’m bragging on myself. So I put what I wrote into ChatGPT and told it to make me sound a little more like an expert. And what came out was a really fantastic bio that didn’t feel like I was bragging on myself because ChatGPT did it.” 

I've already noticed that some faculty members are doing a really nice job of putting it into their syllabus instructions for how you cite if you use a large language model. I think that's a good thing. I also think that, like most other technology tools, we're going to have to learn how to use it, when to use it, where to use it and the guardrails we need when using it. We’re in the wild, wild west days as these large language models are rolled out to the general public. All of a sudden, we have these tools at our fingertips that we can play with. We are learning what their capabilities are and what their limits are. It's a super fun time to study things in this space and to watch it evolve so rapidly. I’ve never seen a technology evolve at this speed. 

Sarah Crowder
Digital Content Intern
Ry Olszewski
Photo Intern