
Michael Gleicher

Professor, Department of Computer Sciences
University of Wisconsin, Madison
1210 West Dayton St., Madison, WI 53706
gleicher@cs.wisc.edu

Office Hour: (Summer 2022) By appointment.

I am a professor working in areas related to Visual Computing. My research these days is mainly about robotics and data visualization. With both, I am interested in how we can make them useful for people. I remain interested in animation, virtual reality, multimedia, …

A brief biography will tell you how I got here. You can see a reasonably current CV, but you probably are looking for papers, talks, videos or advice.

Teaching: This semester (Fall 2022), I will teach CS765 Data Visualization. Last semester (Spring 2022), I taught CS559 Computer Graphics.

I have some pages with various Advice I generally give to students. This includes the format for status reports, what I’d like to see in Prelims and Theses, my grad school FAQ, and my advice on how to give a talk.

Come and talk to me if you’re interested in data visualization, robotics, computer graphics, or related topics. If you are an undergrad looking to work on a project, please see Undergrad Research, Projects and Directed Studies. If you are asking about a reference letter, please see Reference Letters for Students in Classes.

If you’re interested in joining our group, come talk to me! If you aren’t a student at Wisconsin yet, please look at my grad school FAQ, particularly the last few questions.

Current Research Themes

The projects list is more than slightly out of date; I need to revitalize it. But there are several things going on in robotics (tele-operation, providing awareness to remote users, using novel sensors, …) and visualization (summarization, text collection exploration, uncertainty, …).

Selected Past (but recent) Themes

Communicating Physical Interactions: We are working on ways for people and robots to communicate to each other about how objects should be manipulated in the world. Manipulations necessarily involve physical interactions (e.g., forces must be applied correctly). We are exploring ways for people to tell robots how to act with appropriate forces (e.g., to teach manipulation skills) as well as for robots to communicate back to people about the actions they are performing.
Communicative Robot Motions: If robots are going to work around people, it will be important that people can interpret the robots’ movements correctly. We are developing ways to make robots move such that people will interpret them correctly. For example, we are considering how to design robot control algorithms such that the resulting movements are understandable, predictable, aesthetically pleasing, and convey a sense of appropriate affect (e.g., confidence).
Interacting with Machine Learning: People interact with machine learning systems in many ways: they must build them, debug them, diagnose them, decide to trust them, gain insights on their data from them, etc. We are exploring this in both directions: How do we build machine learning tools into interactive data analysis in order to help people interpret large and complex data? How do we build interaction tools that can help people construct and diagnose machine learning models?
Visualizing Comparisons for Data Science: Data interpretation tasks often involve making comparisons among the data, or can be thought of as comparisons. We are developing better visualization tools for performing comparisons across various data challenges, as well as developing better methods for inventing new designs.
Communicative Characters: We are working on better ways to synthesize human motions to make animated characters (both on screen and robots) that are better able to communicate. Generally, we focus on trying to make use of collections of examples (such as motion capture) to build models that allow us to generate novel movements, or to define models of communicative motions.
Perceptual Principles for Visualization: Understanding how people see can inform how we should design visualizations. We have been exploring how recent results in perception (e.g., ensemble encoding) can be exploited to create novel visualization designs, and how principles of perception can inform visualization designs.
Visualizing English Print: To drive our data science efforts, we took a specific application: working with English literature scholars to develop approaches to working with large collections of historical texts.
Video, Animation and Image Authoring: Our goal is to make it easier for people to create usable images and video. For example, we have developed methods for improving pictures and video as a post-process (e.g., removing shadows and stabilizing video). We have also worked on adapting imagery for use in new settings (e.g., image and video retargeting or automatic video editing) and making use of large image collections (e.g., interestingness detection or panorama finding).


These are the main classes I teach. You can see more on the Graphics Group Courses Page.

Older classes that might not get taught again for a while:

  • CS777: Computer Animation is a graduate-level CS class for people with some graphics background. It was taught regularly in the past (2013, 2011, 2006, 2004, 2003), but it kind of died off from lack of interest (both student interest and my interest).
  • CS679 Computer Games Technologies: this class was popular, so I tried to teach it regularly for several years (2012, 2011, 2010).
  • Advanced Graphics: In the Spring of 2009, I taught an Advanced Graphics class.

Selected Recent Publications

A (pretty) complete list is available here. Here are some selected recent ones:

  • RAL ‘21 (ICRA ‘21): Single-query Path Planning Using Sample-efficient Probability Informed Trees w/Rakita and Mutlu
  • RAL ‘21 (ICRA ‘21): Corrective Shared Autonomy for Addressing Task Variability w/Hagenow et al.
  • ICRA ‘21: CollisionIK: A Per-Instant Pose Optimization Method for Generating Robot Motions with Environment Collision Avoidance w/Rakita, Shi and Mutlu
  • ICRA ‘21: Recognizing Orientation Slip in Human Demonstrations w/Hagenow et al.
  • TVCG ‘21: embComp: Visual Interactive Comparison of Vector Embeddings w/Heimerl et al.
  • TVCG ‘21 (VAST ‘20): CAVA: A Visual Analytics System for Exploratory Columnar Data Augmentation Using Knowledge Graphs w/Cashman et al.
  • iScience ‘21: CellO: Comprehensive and hierarchical cell type classification of human cells with the Cell Ontology w/Bernstein, Ma and Dewey