Emerging Business Leaders

Preparing tomorrow’s leaders – today

The Emerging Business Leaders (EBL) program at Gies College of Business is a week-long summer initiative designed for high-achieving, underserved rising high school seniors with diverse experiences, perspectives, and goals. During their week on the University of Illinois Urbana-Champaign campus, students explore a variety of business disciplines; engage with faculty, students, and alumni; and develop critical skills like communication and personal branding, all while living on campus. The program offers a transformative educational experience that helps prepare participants to enter college, encourages them to consider pursuing a business degree, and fosters lifelong relationships.

2025 Program Dates Coming Soon.

Program Activities

  • Engage in interactive discussions with Gies Business staff, students, and alumni about career possibilities in business and the Gies student experience
  • Work in groups to solve business problems
  • Learn about college admissions
  • Have fun and make new friends

Application Criteria

The EBL Program is open to underrepresented students entering their senior year of high school. You must have:

  • A GPA of 3.2 or higher (4.0 scale)
  • Demonstrated leadership through extracurricular, volunteer, or work experiences
  • Ability to attend the entire program 


Program Benefits 

All students who successfully complete the Emerging Business Leaders program will receive a University of Illinois application fee waiver. Students who apply, are admitted, and enroll in Gies Business will qualify for a renewable scholarship of up to $5,000 to help cover their academic costs.

Admission to the Emerging Business Leaders program does not guarantee admission to Gies Business or the University of Illinois.

Gies News and Events

Study: Community Notes on X could be key to curbing misinformation

Nov 18, 2024, 08:13 by Aimee Levitt
Gies research reveals that crowd-sourced fact-checking can effectively curb misinformation, as users are more likely to retract false or misleading tweets when peer-reviewed corrections are added.

When it comes to spreading misinformation — about elections, conspiracy theories, the deaths of celebrities who are actually still alive — social media gets a large share of the blame. Fake news often starts on Facebook or X (formerly Twitter) and spreads unchecked, sometimes taking on the authority of actual news, like the tale of JD Vance’s relationship with a couch.

Journalists and professional fact checkers, like the staff of FactCheck.org, have tried to stop the spread of misinformation, but often by the time a story receives sufficient attention to warrant a check, the damage has already been done.

In 2021, Twitter piloted a new crowd-sourcing program called Birdwatch that was intended to stop social media-based misinformation at its source. Users were encouraged to send in corrections or add context to false or misleading tweets. Later, its name was changed to Community Notes, and after Elon Musk bought the platform in 2022, the program expanded.

“Elon Musk tweeted a lot about this Community Notes system,” said Yang Gao, an assistant professor of business administration at Gies College of Business at the University of Illinois Urbana-Champaign. “He said, ‘This whole idea is fantastic, it will help curb misinformation,’ but without any solid evidence.”

So Gao decided to find that evidence himself. With his PhD student Maggie Mengqing Zhang of the Institute of Communications Research at the University of Illinois Urbana-Champaign and Professor Huaxia Rui of the Simon Business School at the University of Rochester, he set out to measure how effective Community Notes was at convincing X users to voluntarily retract false or misleading tweets. They report their findings in a working paper titled “Can Crowdchecking Curb Misinformation? Evidence from Community Notes.”

Much to their collective surprise, Community Notes actually works. 

But when they started the project, Gao said, they weren’t sure what the results would be. “People tend to be stubborn,” he said. “Even if the note is true – if it's 100% accurate – they can easily just deny what the note is saying.” There was also the question of whether users would accept corrections (or even criticism) from their peers, as opposed to a higher authority like an X staffer or a professional fact checker. Critics of Community Notes had also expressed concern that partisan bias in the notes might lead to more polarization. Still, Gao saw several advantages in a crowdsourced fact-checking system like Community Notes.

“It's very hard to scale professional fact-checking,” Gao said. “When you have crowdchecking, well, you're using the wisdom of the crowd, so it's very easy to scale up. That's one advantage. The other advantage is now you're introducing diverse perspectives from the audience. That's in the spirit of democracy to my understanding.”

The Community Notes algorithm is also public, which lends transparency to the program and, therefore, gives users more reason to trust it. It also combats charges of bias by demonstrating that users have a wide variety of political beliefs. 

The hardest part of the research was gathering data. X releases a public dataset of Community Notes every day, but there were too many tweets and notes to monitor manually. At first Gao and his colleagues tried using an application programming interface (API), but the cost was prohibitive – $100 a month for 10,000 data requests, which would not have been nearly enough. Instead, they created their own system: they downloaded the public Community Notes dataset every day and built an automated tool to do a day-to-day comparison and determine which tweets had been retracted. They did this every day between June 11 and August 2, 2024.
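The day-to-day comparison can be sketched as a set difference over tweet IDs across consecutive daily snapshots. This is a minimal illustration, not the authors' actual tool; the snapshot layout (a list of records with a `tweet_id` field) and the function names are assumptions for the example.

```python
def tweet_ids(snapshot):
    """Collect the set of tweet IDs present in one daily snapshot.

    `snapshot` is assumed to be a list of dicts with a `tweet_id` key,
    one dict per noted tweet (a hypothetical layout for illustration).
    """
    return {row["tweet_id"] for row in snapshot}

def retracted_between(yesterday, today):
    """Tweets present yesterday but missing today are candidate retractions."""
    return tweet_ids(yesterday) - tweet_ids(today)

# Toy snapshots: tweet "1" disappears between the two days.
day1 = [{"tweet_id": "1"}, {"tweet_id": "2"}, {"tweet_id": "3"}]
day2 = [{"tweet_id": "2"}, {"tweet_id": "3"}]
print(sorted(retracted_between(day1, day2)))  # ['1']
```

In practice a tweet can also vanish from the feed for other reasons (account suspension, deletion by moderators), so a real pipeline would need extra checks before labeling a disappearance a voluntary retraction.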

Once they’d assembled the dataset of 89,076 tweets, they used a regression discontinuity design and an instrumental variable analysis to examine whether a publicly displayed note under a tweet leads to a higher chance of voluntary tweet retraction.
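The regression discontinuity idea is that a note becomes publicly visible only when its helpfulness rating crosses a threshold, so tweets whose notes land just above versus just below the cutoff are otherwise comparable, and any jump in retraction rates at the cutoff can be attributed to the note's display. The sketch below simulates this with made-up numbers (a cutoff at 0, a 5-point jump in retraction probability) and fits a local linear regression on each side; it is not the paper's specification, just a minimal sharp-RDD illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
score = rng.uniform(-1, 1, n)          # hypothetical note helpfulness score; display cutoff at 0
shown = (score >= 0).astype(float)     # note publicly displayed iff score crosses the cutoff
# Simulated outcome: baseline 2% retraction rate, +5 points when the note is shown,
# plus a mild trend in the score.
retract = (rng.uniform(size=n) < 0.02 + 0.05 * shown + 0.01 * score).astype(float)

# Local linear regression within a bandwidth around the cutoff, with separate
# slopes on each side; the coefficient on `shown` estimates the jump.
h = 0.5
m = np.abs(score) < h
X = np.column_stack([np.ones(m.sum()), shown[m], score[m], shown[m] * score[m]])
beta, *_ = np.linalg.lstsq(X, retract[m], rcond=None)
print(f"estimated jump in retraction probability: {beta[1]:.3f}")
```

The estimate recovers a value close to the simulated 0.05 jump. An instrumental-variable analysis, as in the paper, additionally handles the fact that which notes get displayed is not fully random.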

The data showed that X users were more willing to retract their tweets in response to notes.

This finding is strikingly promising for social media platforms because users’ voluntary retraction, in contrast to forcible content removal, may face less criticism for infringing on freedom of speech, reduce polarization, and eventually “bring down the temperature” as President Joe Biden recently remarked. In other words, crowdchecking, like Community Notes, strikes a balance between protecting First Amendment rights and the urgent need to curb misinformation.

But it took some further analysis to figure out why. 

There are two forms of influence in social media, Gao explained, observed influence and presumed influence. Observed influence is determined by how many people actually interact with an individual social post – for example, a tweet by someone with only a few followers that somehow goes viral. Presumed influence is how influential the user thinks they are based on the number of followers they have. A tweet by a person with 100,000 followers may get only a few likes, but it will, presumably, be seen by a lot of people.

It turned out that observed influence drove retractions far more than presumed influence. This made Gao realize that to make Community Notes even more effective, X should not only notify people who interacted with a tweet that received a note, but also all the followers of the person who wrote the tweet, who may have seen and absorbed the misinformation without any further interaction.

In the future, Gao wants to continue his investigations of misinformation on social media, particularly the way generative AI has been used to produce misinformation (e.g., deepfakes). He and his coauthors are still waiting to see reactions to the current paper. He thinks it could have wider implications for social media platforms beyond X. At the moment, the only other social media platforms with crowdsourced fact-checking are YouTube and Weibo, the Chinese microblogging site.

“Based on our findings,” he said, “we can say this crowdchecking system is working pretty well. The government should consider legislation or provide support to those social media platforms to help them build similar systems. Most importantly, the details of such systems should be transparent to the public, which is the key to fostering trust.”