Adam Cannon

Computer Science

As a member of Columbia’s teaching faculty, Cannon develops and teaches large undergraduate computer science courses for majors and non-majors. He chairs the computer science undergraduate curriculum committee and serves on the SEAS Committee on Instruction. Cannon is also a member of the development committee for the new AP Computer Science Principles Exam. His current research focuses on how to teach computer science effectively to liberal arts students, especially those in the humanities.

Professional Affiliations

  • ACM SIGCSE
  • ACM SIGKDD

Honors & Awards

  • Presidential Award for Outstanding Teaching, Columbia University, 2016.
  • Great Teacher Award, Society of Columbia Graduates, Columbia University, 2016.
  • Department of Computer Science Faculty Teaching Award, Columbia University, 2009.
  • Distinguished Faculty Teaching Award, SEAS Alumni Association, Columbia University, 2002.

Selected Publications

  • C. Murphy, R. Powell, K. Parton, A. Cannon, “Lessons Learned from a PLTL-CS Program”, Proceedings of the 42nd ACM SIGCSE Technical Symposium on Computer Science Education, 2011.
  • C. Murphy, E. Kim, G. Kaiser, A. Cannon, “BackStop: A Tool for Debugging Runtime Errors”, Proceedings of the 39th ACM SIGCSE Technical Symposium on Computer Science Education, 2008.
  • A.H. Cannon, D.R. Hush, “Multiple Instance Learning using Simple Classifiers”, Proceedings of the International Conference on Machine Learning and Applications, 2004.
  • A.H. Cannon, L. Cowen, “Approximation algorithms for the class cover problem”, Annals of Mathematics and Artificial Intelligence, 40(3): 215-223, 2004.
  • A.H. Cannon, J.W. Howse, D.R. Hush, J.C. Scovel, “Simple Classifiers”, Los Alamos National Laboratory Technical Report No. LAUR-03-0193, 2003.
  • A.H. Cannon, J.W. Howse, D.R. Hush, J.C. Scovel, “Learning with the Neyman-Pearson and min-max criteria”, Los Alamos National Laboratory Technical Report No. LAUR-02-2951, 2002.
  • A.H. Cannon, J.M. Ettinger, D.R. Hush, J.C. Scovel, “Machine learning with data dependent hypothesis classes”, Journal of Machine Learning Research, 2(Feb): 335-358, 2002.