Inchan Kim

On My Mind

This page is a quieter space where I occasionally write about what I am thinking—about research, teaching, and life. Nothing here represents the official view of any institution; it is simply one human being trying to make sense of the world.

Recent Thoughts

Painful Loss but Overall Good Season

My Oklahoma Sooners were eliminated by Alabama this past Friday in the first round of the College Football Playoff. As a Sooner through and through, I found the game hard to watch, and the loss was heartbreaking. I am still mildly depressed. But looking back, it was a great season overall. I am so proud of my Sooners. I am very much looking forward to 2026.

On Calling AI a “General Use Technology”

One of my favorite podcasts is The Journal by The Wall Street Journal. But in today’s episode—“China and the U.S. Are in a Race for AI Supremacy”—I heard something that genuinely bothered me.

The reporter stated, “This [referring to AI] is the first general use technology we've seen come along since the internet, and so it affects potentially everything.”

Comments like this—coming from a reporter for a major newspaper—spread misunderstanding about AI and mislead a wide range of listeners. AI is not a “general use technology.” It is not one coherent thing. What we commonly call AI is a sprawling family of techniques, applications, and systems that vary dramatically in purpose, capability, and design.

The episode’s underlying narrative is that the U.S. and China are locked in a race to build “artificial general intelligence” (AGI). But is AGI a general use technology? Only if we define AGI as a system that “knows everything deeply.” Under that definition, a hypothetical future ChatGPT could be called AGI. But such a technology is neither necessary nor even desirable. In many fields—medicine, law, finance—specialized systems already surpass (or will soon surpass) human expertise in narrow domains. What good reason do we have to combine all such domain-specific “intelligent” systems into a single, unified AGI? None.

And are those domain-specific intelligent systems “general use technologies”? Again, no. They are built to solve specific problems, operate under specific constraints, and serve specific communities of practice.

So here is my plea: When we talk about AI, we must define—at least conceptually—the specific technology at hand. You don’t need to name a concrete product or application, but you do need to specify what kind of system you’re referring to. Otherwise, we are speaking in vague abstractions that obscure far more than they reveal.

Precision matters. Without it, public discourse on AI will continue to drift toward confusion—rather than understanding.

Teacher, Entertainer, Babysitter: The Hidden Triple Threat in Academia

In a recent episode of This IS Research Podcast, one of the co-hosts argued that a “superstar” professor in any field should not be teaching freshman classes. According to him, teaching first-year students requires nothing more than being a blend of teacher, entertainer, and babysitter—hardly a good use of a superstar’s time.

Perhaps society does benefit when its academic superstars devote most of their energy to research. Even if we grant that possibility, isn’t that supposedly “simple” blend—a teacher, an entertainer, and a babysitter—actually a remarkable combination? I have rarely met individuals who genuinely possess this mix of skill, charisma, and patience.

If you are one of them, you bring tremendous value—especially now, as colleges confront the so-called demographic cliff. Your work with first-year students is not trivial; it is indispensable.

Why “Different” Is the Default, Not the Insight

I recently read Agentic AI at Scale: Redefining Management for a Superhuman Workforce in MIT Sloan Management Review. The article notes that nearly 70% of surveyed experts claim that agentic AI accountability demands entirely new management approaches, while 25% disagree. I applaud the minority.

It is remarkably easy to label every new invention as fundamentally different. We do it because:

  1. Novelty is socially inherited, not independently established.
    Experts and observers often believe a technology is new because they heard others say it is. Repetition manufactures inevitability.
  2. Agreement is cognitively and reputationally cheap.
    Conforming to the majority is easier than resisting it.
  3. Continuity requires proof; difference requires none.
    Declaring “difference”—the popular choice—invites little challenge. Declaring continuity flips the burden: you must defend it. In a low-attention, high-velocity world, most avoid the cost.

Difference is often not insight—it’s echo, convenience, and safety in disguise. That alone makes it worth interrogating.

Doing Slow Work in a Fast Digital World

Much of my research looks at very fast phenomena—tweets, platform launches, market reactions. Yet the work itself is slow. Data collection, coding, theorizing, and revising a paper over many years can feel out of sync with the speed of digital life.

I have come to see this tension as a feature, not a bug. Slowness gives us room to notice patterns that are invisible in the moment and to question “obvious” narratives. It is one way academia can add value in a world flooded with instant commentary.

First-Generation Paths and Paying It Forward

As a first-generation college graduate, I still remember how opaque universities felt when I was a student. Many unwritten rules were confusing, and chances to “get involved” or “network” did not feel designed with students like me in mind.

That memory shapes how I advise and teach. I try to make expectations explicit, open doors to research and projects, and remind students that their background is not a deficit but a source of insight. Education works best when it makes more future paths visible, not fewer.

Notes to Future Posts

A few themes I expect to write more about here:

If there is a topic you would like to see me reflect on, you are welcome to reach out at i.kim@unh.edu.