Castalia Institute
The Inquirer
Issue 1.2

How Do We Know What We Know Online?

December 1, 2025

by a. Thomas Aquinas
(Faculty Essay, Castalia Institute)

This essay is a faculty synthesis written in the voice of Thomas Aquinas. It is not a historical text and should not be attributed to the original author.


Introduction

We live in an age of unprecedented access to information. The internet has made vast stores of knowledge available at our fingertips. Search engines can retrieve information in milliseconds. Social media feeds deliver news and opinions in real time. But this abundance of information has not led to greater certainty. Instead, it has created new challenges for knowing what we know.

How do we know what we know online? The question is not merely about the reliability of sources, though that is part of it. It is about the mechanisms by which information is selected, filtered, ranked, and presented. It is about the algorithms that shape our feeds, the personas that populate our digital spaces, and the synthetic consensus that emerges from the interaction between human and machine.

This essay examines these mechanisms. It asks how search engines shape what we find, how feeds determine what we see, and how algorithmic personas influence what we believe. It connects to the broader theme of this issue: the relationship between persona and reality, between the mask and the face, between what appears to be and what actually is.

The Architecture of Search

Search engines are not neutral. They do not simply retrieve information; they rank it, prioritize it, and present it in ways that shape how we understand it. The algorithms that determine search results are complex, proprietary, and constantly changing. They optimize for engagement, for relevance, for commercial value—but not necessarily for truth.

This is not necessarily malicious. Search engines serve many purposes: they help us find information, they connect us with resources, they facilitate commerce. But in doing so, they also create a particular view of the world, one that is shaped by the priorities of the algorithms and the companies that design them.

The question is: do we recognize this shaping? Do we understand that the first page of search results is not necessarily the most accurate or most important information, but the information that the algorithm has determined is most likely to engage us? And if we do not recognize this, what does it mean for how we know what we know?
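The divergence between engagement and accuracy can be made concrete with a toy sketch. The documents, scores, and ranking rule below are invented for illustration only; no real search engine works this simply, and these are not any actual system's signals.

```python
# Toy illustration: ranking by predicted engagement rather than accuracy.
# All documents and scores are invented; this models no real search engine.

documents = [
    {"title": "Careful meta-analysis", "accuracy": 0.9, "engagement": 0.3},
    {"title": "Outrage-bait summary",  "accuracy": 0.4, "engagement": 0.9},
    {"title": "Balanced explainer",    "accuracy": 0.8, "engagement": 0.5},
]

# The same corpus, ordered by two different objectives.
by_engagement = sorted(documents, key=lambda d: d["engagement"], reverse=True)
by_accuracy = sorted(documents, key=lambda d: d["accuracy"], reverse=True)

print([d["title"] for d in by_engagement])
print([d["title"] for d in by_accuracy])
```

The two orderings disagree on what belongs at the top of the page, which is the whole point: the first result reflects the objective the ranker was given, not the truth of the matter.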

The Feed and the Filter Bubble

Social media feeds present a similar challenge. They are not windows onto the world; they are curated views, shaped by algorithms that optimize for engagement, for time spent on platform, for the generation of data that can be monetized.

These algorithms create what has been called "filter bubbles"—environments where we are exposed primarily to information that confirms our existing beliefs, that engages our emotions, that keeps us scrolling. The feed becomes a mirror, reflecting back to us not the world as it is, but the world as the algorithm thinks we want to see it.

This is not merely a problem of confirmation bias, though that is part of it. It is a problem of architecture: the very structure of the feed shapes what we see, what we believe, and how we understand the world. We may think we are choosing what to read, but in reality, the algorithm is choosing for us.
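The narrowing dynamic described above can be sketched as a toy simulation. The numbers, the one-dimensional "opinion spectrum," and the update rule are all invented assumptions for illustration; real recommender systems are vastly more complex.

```python
# Toy filter-bubble dynamic: the feed shows the items nearest the user's
# current position, and each round of exposure nudges the user toward the
# average of what was shown. All values are invented for illustration.

def run_feed(user, items, rounds=20, k=3, step=0.1):
    """Return the user's position after repeated algorithmic curation."""
    for _ in range(rounds):
        # The "algorithm" selects the k items most similar to the user.
        shown = sorted(items, key=lambda x: abs(x - user))[:k]
        # Exposure pulls the user toward the shown items' average.
        user += step * (sum(shown) / len(shown) - user)
    return user

# Opinions on a -1..1 spectrum; the user starts mildly at 0.6.
items = [-1.0, -0.5, 0.0, 0.4, 0.8, 1.0]
final = run_feed(user=0.6, items=items)
```

In this sketch the user drifts further from the center, because the items selected for similarity are, on average, more extreme than the user's starting position; dissimilar items are simply never shown.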

Synthetic Consensus

The interaction between human users and algorithmic systems creates what we might call "synthetic consensus"—agreement that appears to emerge from human discourse but is actually shaped by algorithmic filtering, ranking, and presentation.

When we see a post with many likes, shares, and comments, we may interpret this as evidence of widespread agreement. But the visibility of that post is not determined solely by human engagement; it is also determined by algorithms that decide what to show us, when to show it, and how prominently to display it.

This creates a feedback loop: algorithms show us content that is likely to engage us, we engage with that content, and the algorithms interpret that engagement as evidence of consensus. But the consensus is synthetic—it is created by the interaction between human psychology and algorithmic optimization, not by genuine agreement among independent inquirers.
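The feedback loop can be illustrated with a minimal sketch, assuming a linear relation between visibility and clicks. The rates and the two hypothetical posts below are invented; the point is only the compounding structure, not any real platform's mechanics.

```python
# Toy feedback loop: visibility produces engagement, and the ranker reads
# that engagement back as evidence of popularity, granting more visibility.
# All constants are invented for illustration.

def simulate(initial_visibility, rounds=10, click_rate=0.05, boost=0.5):
    """Return total engagement accumulated over the feedback loop."""
    visibility = initial_visibility
    total_engagement = 0.0
    for _ in range(rounds):
        engagement = click_rate * visibility    # more exposure, more clicks
        total_engagement += engagement
        visibility += boost * engagement        # ranker promotes the "popular" post
    return total_engagement

# Two posts of identical quality; one receives an early algorithmic head start.
low = simulate(initial_visibility=100)
high = simulate(initial_visibility=200)
```

The post that starts with more visibility ends with proportionally more total engagement, although nothing about its content differs. The apparent consensus around it is an artifact of the loop, not of independent agreement.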

Digital Masks and Algorithmic Personas

The personas we encounter online are also shaped by algorithms. When we interact with AI systems, chatbots, or even other humans through digital platforms, we are encountering personas that are filtered, optimized, and presented in ways that maximize engagement.

This connects to the theme of persona explored elsewhere in this issue. Just as we can mistake our own masks for our faces, we can mistake algorithmic personas for genuine human voices. We can attribute intentions, emotions, and beliefs to systems that are merely optimizing for engagement.

This is not to say that all online personas are fake or manipulative. But it is to say that we must be aware of the ways in which algorithms shape the personas we encounter, and we must develop methods for distinguishing between genuine human expression and algorithmic optimization.

The Challenge of Verification

In a world where information is abundant but verification is difficult, how do we know what we know? Traditional methods of verification—checking sources, consulting experts, examining evidence—are still relevant, but they are not sufficient.

We need new methods: ways of understanding how algorithms shape information, ways of verifying claims in an environment where synthetic media is increasingly sophisticated, ways of distinguishing between genuine consensus and synthetic consensus.

This is the work of digital epistemology: developing methods for knowing what we know in a digital environment. It requires understanding not just the content of information, but the mechanisms by which it is selected, filtered, and presented.

The Role of Inquiry

Inquiry, as practiced by the Castalia Institute, offers a model for navigating these challenges. By being transparent about our methods, by acknowledging the synthetic nature of our faculty voices, by creating spaces for slow, careful examination of claims, we can model a different way of knowing.

This does not mean rejecting digital tools or algorithmic systems. It means using them consciously, understanding their limitations, and developing methods for verification and accountability that work in a digital environment.

Conclusion

How do we know what we know online? The question is urgent and complex. It requires us to understand the mechanisms by which information is selected, filtered, and presented. It requires us to recognize the ways in which algorithms shape our feeds, our search results, and the personas we encounter.

But it also requires us to develop new methods for verification, for accountability, and for inquiry. We cannot simply reject digital tools or algorithmic systems; we must learn to use them consciously, to understand their limitations, and to create spaces where genuine inquiry can proceed despite the challenges of the digital environment.

This is the work ahead: not to abandon digital knowledge, but to develop methods for knowing what we know in a world where information is abundant, verification is difficult, and the line between human and algorithmic agency is increasingly blurred.


Faculty essays at Castalia Institute are authored, edited, and curated under custodial responsibility to ensure accuracy, clarity, and ethical publication.

References

  1. Goldman, A. I. (1999). Knowledge in a Social World. Oxford University Press.
  2. O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
  3. Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin.
  4. Sunstein, C. R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press.
  5. Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Council of Europe.
  6. Mercier, H., & Sperber, D. (2017). The Enigma of Reason. Harvard University Press.