Grossman has worked in several fields. His early work (1984–1990) was in mathematics, where he developed algorithms in symbolic and numeric computing. In 1989, working with Richard Larson, he showed that rooted trees have a natural multiplicative structure and in fact form a Hopf algebra. This algebra, sometimes called the Grossman–Larson algebra, is dual to the Connes–Kreimer algebra, which is one way of organizing the computations required when renormalizing
Feynman diagrams. Working with Peter Crouch, he showed that there are
Runge–Kutta methods that evolve naturally on
Lie groups.

From 1990 to 2010, he worked primarily in computer science, specifically data mining and data-intensive computing. With Stuart Bailey and Yunhong Gu, he developed open-source software for moving large datasets over wide-area, high-performance networks (PTool and the UDP-based Data Transfer Protocol, or UDT). With Yunhong Gu, he also developed Sector/Sphere, a distributed platform for data-intensive computing. During this period, he also founded the Data Mining Group, which develops data mining standards, and led the technical working group that developed the
Predictive Model Markup Language (PMML), which is now the dominant standard in analytics.

Since 2010, he has focused primarily on data science and its applications to biology, medicine, health care, and the environment. He developed the first biomedical cloud designated an NIH Trusted Partner, allowing it to interoperate with NIH's controlled-access genomic data. He is currently leading the effort to build the NCI Genomic Data Commons, which will host all the genomic and associated clinical data from NIH/NCI-funded research projects and clinical trials. He is a faculty member at the University of Chicago, where he directs the Center for Data Intensive Science, and is the founder and director of the Open Commons Consortium.

==Entrepreneurial activity==