Ben Levinstein is an assistant professor at the University of Illinois who specializes in formal epistemology, decision theory, philosophy of science, and, increasingly, the ethics and philosophy of artificial intelligence. He has held postdoctoral positions at Oxford and the University of Bristol.
Much of Ben's past work has developed new accounts of rationality for both belief and action. He is a proponent of so-called “accuracy first” approaches to epistemology, which treat accuracy as the most foundational consideration for the rationality of belief, and (on some days) of functional decision theory, which takes rational action to be determined by what the decision algorithm an agent instantiates would recommend in order to maximize desirable consequences, accounting for the fact that the same algorithm may be instantiated in multiple agents.
More recently, his research interests have begun to shift toward the so-called “alignment problem” in AI. As AI systems grow more capable, they will likely meet or exceed human performance on most cognitive tasks. If and when that happens, we need to ensure that sufficiently advanced AI coheres with human values and goals. Failing that, humanity risks losing control over the future of its own problem-solving and decision-making. Some of this research interacts with Ben’s work on decision theory, which investigates how rational agents pursue their aims and interact with one another in complex settings.
Ben, who is currently on leave pursuing a Mellon grant, has been developing a new course on Computation and Humanity’s Future. He regularly teaches classes on epistemology and logic. His current and future research is funded by grants from the Mellon Foundation, Open Philanthropy, and the FTX Longterm Future Fund Regranting Program.