Intelligent User Interfaces Laboratory

We are interested in enabling natural human-computer interaction by combining techniques from machine learning, computer vision, computer graphics, human-computer interaction, and psychology. Specific areas of focus include multimodal human-computer interfaces, affective computing, pen-based interfaces, sketch-based applications, intelligent user interfaces, and applications of computer vision and machine learning to real-world problems. Browse through the publications and research pages to get a flavor of IUI@Koc.

Publications

120) Soykan G., Yuret D., T. M. Sezgin, “Identity-Aware Semi-Supervised Learning for Comic Character Re-Identification,” arXiv preprint arXiv:2308.09096, 2023.

119) Sabuncuoglu A., T. M. Sezgin, “Developing a Multimodal Classroom Engagement Analysis Dashboard for Higher-Education,” Proceedings of the ACM on Human-Computer Interaction, 2023.

118) T. M. Sezgin, “Online Interpretation of Sketched Drawings,” Interactive Sketch-based Interfaces and Modelling for Design, 2023.

117) Buyukyazi T., Korkmaz M., T. M. Sezgin, “HAISTA-NET: Human Assisted Instance Segmentation Through Attention,” arXiv preprint arXiv:2305.03105, 2023.

116) Sabuncuoğlu A., Besevli C., T. M. Sezgin, “Towards Building Child-Centered Machine Learning Pipelines: Use Cases from K-12 and Higher-Education,” arXiv preprint arXiv:2304.09532, 2023.

115) Sabuncuoğlu A., T. M. Sezgin, “Multimodal Group Activity Dataset for Classroom Engagement Level Prediction,” arXiv preprint arXiv:2304.08901, 2023.

114) Triantafyllopoulos A., Schuller B. W., İymen G., He X., Yang Z., Tzirakis P., Liu S., Mertes S., André E., T. M. Sezgin et al., “An overview of affective speech synthesis and conversion in the deep learning era,” IEEE, 2023.

113) Kesim E., Numanoglu T., Bayramoglu O., Turker B. B., Hussain N., Yemez Y., Erzin E., T. M. Sezgin, “The eHRI database: a multimodal database of engagement in human–robot interactions,” Language Resources & Evaluation, 2023.

112) Soykan G., Yuret D., and T. M. Sezgin, “A comprehensive gold standard and benchmark for comics text detection and recognition,” arXiv preprint arXiv:2212.14674, 2022.

111) Akman A., Sahillioğlu Y., T. M. Sezgin, “Deep generation of 3D articulated models and animations from 2D stick figures,” Computers & Graphics, vol. 109, pp. 65-74, 2022.

110) Topal B. B., Yuret D., and T. M. Sezgin, “Domain-adaptive self-supervised pre-training for face & body detection in drawings,” arXiv preprint arXiv:2211.10641, 2022.

109) Sabuncuoglu A., and T. M. Sezgin, “Exploring children’s use of self-made tangibles in programming,” arXiv preprint arXiv:2210.06258, 2022.

108) N. Hussain, E. Erzin, Y. Yemez and T. M. Sezgin, “Training Socially Engaging Robots: Modeling Backchannel Behaviors with Batch Reinforcement Learning,” IEEE Transactions on Affective Computing, 2022.

107) Sabuncuoglu A., and T. M. Sezgin, “Kart-ON: An Extensible Paper Programming Strategy for Affordable Early Programming Education,” Proceedings of the ACM on Human-Computer Interaction 6 (EICS), 2022.

106) Sabuncuoğlu A., and T. M. Sezgin, “Prototyping Products using Web-based AI Tools: Designing a Tangible Programming Environment with Children,” 6th FabLearn Europe/MakeEd Conference, 2022.

105) Çelik B., Dede E., and T. M. Sezgin, “A Criticism on Popular Sketch Datasets,” 30th Signal Processing and Communications Applications Conference (SIU), IEEE, 2022.

104) Sabuncuoğlu A., and T. M. Sezgin, “A Critical Evaluation of Recent Deep Generative Sketch Models from a Human-Centered Perspective,” 30th Signal Processing and Communications Applications Conference (SIU), IEEE, 2022.

103) Yanık E., and T. M. Sezgin, “Active Sketch Scene Learning,” Available at SSRN 4084576, 2022.

102) Ö. Z. Bayramoğlu, E. Erzin, T. M. Sezgin, and Y. Yemez, “Engagement Rewarded Actor-Critic with Conservative Q-Learning for Speech-Driven Laughter Backchannel Generation,” In Proceedings of the 2021 International Conference on Multimodal Interaction (ICMI ’21), 2021.

101) A. Sabuncuoğlu, A. E. Yantaç, T. M. Sezgin, “Teaching K-12 Classrooms Data Programming: A Three-Week Workshop with Online and Unplugged Activities,” arXiv preprint arXiv:2110.05303, 2021.

100) A. Zindancıoğlu and T. M. Sezgin, “Perceptually Validated Precise Local Editing for Facial Action Units with StyleGAN,” arXiv preprint arXiv:2107.12143, 2021.

99) A. Sabuncuoğlu and T. M. Sezgin, “Developing Affordable Tangible Programming Education Applications Using Mobile Vision,” 2021 29th Signal Processing and Communications Applications Conference (SIU), pp. 1-4, 2021.

98) K. T. Yesilbek, T. M. Sezgin, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 2142-2149, 2021.

97) Recep Sinan Tumen, T. M. Sezgin, “Segmentation and Recognition of Offline Sketch Scenes Using Dynamic Programming,” IEEE Computer Graphics and Applications, 2021.

96) Yesilbek, Kemal Tugrul, and T. Metin Sezgin, “Sketch recognition with few examples (vol. 69, pg. 80, 2017),” Computers & Graphics 94, pp. 191-191, 2021.

95) Z. Bucinca, Y. Yemez, E. Erzin and T. M. Sezgin, “AffectON: Incorporating Affect Into Dialog Generation,” IEEE Transactions on Affective Computing, 2020.

94) A. Sabuncuoğlu, T. M. Sezgin, “Kart-ON: Affordable Early Programming Education with Shared Smartphones and Easy-to-Find Materials,” Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, 2020.

93) A. Akman, Y. Sahillioğlu, T. M. Sezgin, “Generation of 3D Human Models and Animations Using Simple Sketches,” Graphics Interface, 2020.

92) Sadia, S. E. Emgin, T. M. Sezgin and Ç. Başdoğan, “Data-Driven Vibrotactile Rendering of Digital Buttons on Touchscreens,” International Journal of Human-Computer Studies, 2020.

91) Kurmanbek Kaiyrbekov, T. M. Sezgin, “Deep Stroke-Based Sketched Symbol Reconstruction and Segmentation,” IEEE Computer Graphics and Applications, 2020.

90) Biswas P., Orero P., T. M. Sezgin, “Special Issue on Intelligent Interaction Design,” Artificial Intelligence for Engineering Design, Analysis and Manufacturing.

89) S. E. Emgin, A. Aghakhani, T. M. Sezgin and C. Basdogan, “HapTable: An Interactive Tabletop Providing Online Haptic Feedback for Touch Gestures,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 9, pp. 2749-2762.

88) Alexandra Bonnici, Alican Akman, Gabriel Calleja, Kenneth P. Camilleri, Patrick Fehling, Alfredo Ferreira, Florian Hermuth, Johann Habakuk Israel, Tom Landwehr, Juncheng Liu, Natasha M. J. Padfield, T. M. Sezgin and Paul L. Rosin, “Sketch-based interaction and modeling: where do we stand?,” Artificial Intelligence for Engineering Design, Analysis and Manufacturing.

87) K. Kaiyrbekov, T. M. Sezgin, “Stroke-based Sketched Symbol Reconstruction and Segmentation,” IEEE Computer Graphics and Applications.

86) N. Hussain, E. Erzin, T. M. Sezgin, Y. Yemez, “Speech Driven Backchannel Generation using Deep Q-Network for Enhancing Engagement in Human-Robot Interaction,” INTERSPEECH: Annual Conference of the International Speech Communication Association, Graz, Austria.

85) N. Hussain, E. Erzin, T. M. Sezgin, Y. Yemez, “Batch Recurrent Q-Learning for Backchannel Generation Towards Engaging Agents,” 8th International Conference on Affective Computing and Intelligent Interaction (ACII).

84) N. Alyuz, T. M. Sezgin, “Interpretable Machine Learning for Generating Semantically Meaningful Formative Feedback,” CVPR, IEEE Conference on Computer Vision and Pattern Recognition, Workshop on Explainable AI, Long Beach, CA.

83) Erelcan Yanik, T. M. Sezgin, “Active Scene Learning,” arXiv preprint arXiv:1903.02832.

82) T. M. Sezgin, Ozem Kalay, “Sketch misrecognition correction system based on eye gaze monitoring,” US patent.

81) Doğancan Kebüde, Cem Eteke, T. M. Sezgin, Barış Akgün, “Communicative Cues for Reach-to-Grasp Motions: From Humans to Robots,” Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, July 10-15, Stockholm, Sweden.

80) B. Berker Türker, T. M. Sezgin, Yücel Yemez, Engin Erzin, “Multimodal prediction of head nods in dyadic conversations,” 2018 26th Signal Processing and Communications Applications Conference (SIU).

79) Stéphane Dupont, Ozan Can Altiok, Aysegül Bumin, Ceren Dikmen, Ivan Giangreco, Silvan Heller, Emre Külah, Gueorgui Pironkov, Luca Rossetto, Yusuf Sahillioglu, Heiko Schuldt, Omar Seddati, Yusuf Setinkaya, T. M. Sezgin, Claudiu Tanase, Emre Toyan, Sean Wood, Doguhan Yeke, “VideoSketcher: Innovative Query Modes for Searching Videos through Sketches, Motion and Sound,” University of Mons.

78) Erik Marchi, Bjorn Schuller, Alice Baird, Simon Baron-Cohen, Amandine Lassalle, Helen O’Rielly, Delia Pigat, Peter Robinson, Ian Davies, Tadas Baltrusaitis, Ofer Golan, Shimrit Fridenson-Hayo, Shahar Tal, Shai Newman, Noga Meir-Goren, Antonio Camurri, Stefano Piana, Sven Bolte, T. M. Sezgin, Nese Alyuz, Agnieszka Rynkiewicz, Aurelie Baranger, “The ASC-Inclusion Perceptual Serious Gaming Platform for Autistic Children,” IEEE Transactions on Games.

77) B. Türker, E. Erzin, Y. Yemez, and T. M. Sezgin, “Audio-Visual Prediction of Head-Nod and Turn-Taking Events in Dyadic Interactions,” Proc. Interspeech 2018, Hyderabad, India.

76) L. Devillers, S. Rosset, G. Dubuisson Duplessis, L. Bechade, Y. Yemez, B. B. Turker, T. M. Sezgin, E. Erzin, K. El Haddad, S. Dupont, P. Deleglise, Y. Esteve, C. Lailler, E. Gilmartin and N. Campbell, “Multifaceted Engagement in Social Interaction with a Machine: The JOKER Project,” 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 2018, pp. 697-701.

75) Ç. Çığ and T. M. Sezgin, “Gaze-based predictive user interfaces: Visualizing user intentions in the presence of uncertainty,” International Journal of Human-Computer Studies, vol. 111, pp. 78-91.

74) B. Alper, N. H. Riche, F. Chevalier, J. Boy and T. M. Sezgin, “Visualization Literacy at Elementary School,” Conference on Pen and Touch Technology in Education.

73) V. Rudakova, N. Lin, N. Trayan, T. M. Sezgin, J. Dorsey and H. Rushmeier, “CHER-ish: A sketch- and image-based system for 3D representation and documentation of cultural heritage sites,” EUROGRAPHICS Workshop on Graphics and Cultural Heritage.

72) B. Türker, Y. Yemez, T. M. Sezgin, E. Erzin, “Audio-Facial Laughter Detection in Naturalistic Dyadic Conversations,” IEEE Transactions on Affective Computing.

71) K. T. Yeşilbek, T. M. Sezgin, “Sketch Recognition with Few Examples,” Computers & Graphics.

70) O. C. Altıok, T. M. Sezgin, “Characterizing User Behavior for Speech and Sketch-based Video Retrieval Interfaces,” Proceedings of Expressive 2017, Posters, Artworks, and Bridging Papers, the Eurographics Association, Los Angeles, CA, USA.

69) B. B. Türker, Z. Buçinca, E. Erzin, Y. Yemez, and T. M. Sezgin, “Analysis of Engagement and User Experience with a Laughter Responsive Social Robot,” Proc. Interspeech, pp. 844-848.

68) W. Shi, Z. Wang, T. M. Sezgin, J. Dorsey, H. Rushmeier, “Material Design in Augmented Reality with In-Situ Visual Feedback,” Proceedings of the Eurographics Symposium on Rendering, forthcoming, Helsinki, Finland.

67) Y. Sahillioglu, T. M. Sezgin, “Sketch-based Articulated 3D Shape Retrieval,” IEEE Computer Graphics and Applications.

66) B. Alper, N. Riche, F. Chevalier, J. Boy, T. M. Sezgin, “Visualization Literacy at Elementary School,” In Proceedings of ACM CHI 2017, Conference on Human Factors in Computing Systems, Denver, CO, May 6-11. Honorable Mention Award (top 5% of accepted publications).

65) O. C. Altıok, K. T. Yesilbek, T. M. Sezgin, “What Auto Completion Tells Us About Sketch Recognition,” In Proceedings of Expressive 2016, Posters, Artworks, and Bridging Papers, the Eurographics Association, Lisbon, Portugal.

64) Ş. Çakmak, T. M. Sezgin, “Building a Gold Standard for Perceptual Sketch Similarity,” In Proceedings of Expressive 2016, Posters, Artworks, and Bridging Papers, the Eurographics Association, Lisbon, Portugal.

63) Ç. Çığ, T. M. Sezgin, “Gaze-Based Biometric Authentication: Hand-Eye Coordination Patterns as a Biometric Trait,” In Proceedings of Expressive 2016, Posters, Artworks, and Bridging Papers, the Eurographics Association, Lisbon, Portugal.

62) C. Tanase, I. Giangreco, L. Rossetto, H. Schuldt, O. Seddati, S. Dupont, O. C. Altıok, T. M. Sezgin, “Semantic Sketch-Based Video Retrieval with Autocompletion,” Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI 2016), ACM, March 7–10, 2016, Sonoma, CA, USA.

61) L. Rossetto, I. Giangreco, S. Heller, C. Tanase, H. Schuldt, O. Seddati, S. Dupont, T. M. Sezgin, O. C. Altıok, Y. Sahillioglu, “IMOTION – Searching for Video Sequences using Multi-Shot Sketch Queries,” Proceedings of the 22nd International Conference on MultiMedia Modeling, Miami.

60) L. Rossetto, I. Giangreco, C. Tanase, H. Schuldt, O. Seddati, S. Dupont, T. M. Sezgin, Y. Sahillioglu, “iAutoMotion – an Autonomous Content-based Video Retrieval Engine,” Proceedings of the 22nd International Conference on MultiMedia Modeling, Miami.

59) L. Devillers, S. Rosset, G. Dubuisson Duplessis, M. A. Sehili, L. Béchade, A. Delaborde, C. Gossart, V. Letard, F. Yang, Y. Yemez, B. B. Türker, T. M. Sezgin, K. El Haddad, S. Dupont, D. Luzzati, Y. Estève, E. Gilmartin, N. Campbell, “Multimodal Data Collection of Human-Robot Humorous Interactions in the JOKER Project,” Proceedings of the 6th International Conference on Affective Computing and Intelligent Interaction, Xi’an, China.

58) B. Schuller, E. Marchi, S. Baron-Cohen, A. Lassalle, H. O’Reilly, D. Pigat, P. Robinson, I. Davies, T. Baltrusaitis, M. Mahmoud, O. Golan, S. Fridenson, S. Tal, S. Newman, N. Meir, R. Shillo, A. Camurri, S. P., A. Staglianò, S. Bölte, D. Lundqvist, S. Berggren, A. Baranger, N. Sullings, T. M. Sezgin, N. Alyuz, A. Rynkiewicz, K. Ptaszek, K. Ligmann, “Recent developments and results of ASC-Inclusion: An Integrated Internet-Based Environment for Social Inclusion of Children with Autism Spectrum Conditions,” Proc. 3rd International Workshop on Digital Games for Empowerment and Inclusion (IDGEI 2015) held in conjunction with the 20th International Conference on Intelligent User Interfaces (IUI 2015), ACM, Atlanta, US.

57) E. Yanik and T. M. Sezgin, “Active Learning for Sketch Recognition,” Computers & Graphics, accepted for publication.

56) Arasan, C. Basdogan, and T. M. Sezgin, “HaptiStylus: A Novel Stylus Capable of Displaying Movement and Rotational Torque Effects,” IEEE Computer Graphics and Applications, accepted for publication, 2015.

55) Caglar Tirkaz, Jacob Eisenstein, T. M. Sezgin and Berrin Yanikoglu, “Identifying visual attributes for object recognition from text and taxonomy,” Computer Vision and Image Understanding.

54) Madan, A. Kucukyilmaz, T. M. Sezgin, and C. Basdogan, “Recognition of Haptic Interaction Patterns in Dyadic Joint Object Manipulation,” IEEE Transactions on Haptics, preprint.

53) L. Rossetto, I. Giangreco, H. Schuldt, S. Dupont, O. Seddati, T. M. Sezgin, Y. Sahillioglu, “IMOTION: A Content-based Video Retrieval Engine,” The 21st International Conference on MultiMedia Modeling, accepted for publication.

52) C. Cig and T. M. Sezgin, “Real-time activity prediction: a gaze-based approach for early recognition of pen-based interaction tasks,” In Proceedings of the Workshop on Sketch-based Interfaces and Modeling (SBIM ’15), Eurographics Association, Aire-la-Ville, Switzerland, pp. 59-65.

51) K. T. Yesilbek, C. Sen, S. Cakmak and T. M. Sezgin, “SVM-based sketch recognition: which hyperparameter interval to try?,” In Proceedings of the Workshop on Sketch-based Interfaces and Modeling (SBIM ’15), Eurographics Association, Aire-la-Ville, Switzerland, pp. 117-121.

50) C. Cig, T. M. Sezgin, “Gaze-Based Prediction of Pen-Based Virtual Interaction Tasks,” International Journal of Human-Computer Studies (TUBITAK A), in press, accepted for publication on 20 September 2014.

49) C. Cig, T. M. Sezgin, “Gaze-Based Virtual Task Predictor,” In Proceedings of the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Gaze in Multimodal Interaction (GazeIn ’14), ACM, Istanbul, Turkey, 16 November 2014.

48) Sezgin, T. M. Sezgin, “Finding the Best Portable Congruential Random Number Generators,” Computer Physics Communications.

47) Arasan, C. Basdogan, T. M. Sezgin, “Haptic Stylus with Inertial and Vibro-Tactile Feedback,” Proceedings of World Haptics Conference.

46) R. S. Tumen, T. M. Sezgin, “DPFrag: A Trainable Stroke Fragmentation Framework based on Dynamic Programming,” IEEE Computer Graphics and Applications, Sept.-Oct. 2013, vol. 33, no. 5, pp. 59-67.

45) A. Kucukyilmaz, T. M. Sezgin, C. Basdogan, “Intention Recognition for Dynamic Role Exchange in Haptic Collaboration,” IEEE Transactions on Haptics, vol. 6, no. 1.

44) C. Cig, T. M. Sezgin, “New modalities, new challenges – Annotating sketching and gaze data,” In Proceedings of the 21st IEEE Signal Processing and Communications Applications Conference (SIU’13), pp. 1-4.

36) C. Tirkaz, B. Yanikoglu, T. M. Sezgin, “Sketched Symbol Recognition with Few Examples Using Particle Filtering,” ACM Symposium on Sketch Based Interfaces and Modeling, Vancouver, Canada.

35) A. Kucukyilmaz, T. M. Sezgin, C. Basdogan, “Conveying intentions through haptics in human-computer collaboration,” IEEE World Haptics Conference 2011, Istanbul, Turkey.

34) R. Arandjelovic, T. M. Sezgin, “Sketch recognition by fusion of temporal and image-based features,” Pattern Recognition, vol. 44, issue 6, pp. 1225-1234.

33)  S. Afzal, T. M. Sezgin, P. Robinson, “Decoding Emotions from Facial Animations,” ACM / SSPNET International Symposium on Facial Analysis and Animation, Edinburgh, UK.

32)  Y. Gao, Q. Zhao, A. Hao, T. M. Sezgin, N. A. Dodgson, “Automatic construction of 3D animatable facial avatars,” Computer Animation and Virtual Worlds, vol. 21, issue 3-4, pp. 343-354, DOI: 10.1002/cav.340.

31)  R. Sinan Tumen, M. Emre Acer, T. M. Sezgin, “Feature Extraction and Classifier Combination for Image-based Sketch Recognition,” ACM Symposium on Sketch Based Interfaces and Modeling, Annecy, France.

30)  S. O. Oguz, A. Kucukyilmaz, T. M. Sezgin, C. Basdogan, “Haptic Negotiation and Role Exchange for Collaboration in Virtual Environments,” Haptics Symposium, Waltham, Massachusetts, USA.

29)  Y. Gao, T. M. Sezgin, N. Dodgson, “Automatic construction of 3D animatable facial models.” International Conference on Computer Animation and Social Agents, Amsterdam, Netherlands.

28)  T. M. Sezgin, I. Davies, P. Robinson, “Multimodal inference for driver-vehicle interaction.” International Conference on Multimodal Interfaces, Cambridge, MA.

27)  S. Afzal, T. M. Sezgin, Y. Gao, P. Robinson, “Perception of Emotional Expressions in Different Representations Using Facial Feature Points.” IEEE International Conference on Affective Computing and Intelligent Interaction, Amsterdam, Netherlands.

26)  T. M. Sezgin, I. Davies, P. Robinson, “Multimodal inference for driver-vehicle interaction.” Workshop on Multimodal Interfaces for Automotive Applications, International Conference on Intelligent User Interfaces, Sanibel, FL.

25)  Blessing, T. M. Sezgin, R. Arandjelovic, P. Robinson, “A multimodal interface for road design,” Workshop on Sketch Recognition, International Conference on Intelligent User Interfaces, Sanibel, FL.

24)  T. M. Sezgin and R. Davis, “Sketch Recognition in Interspersed Drawings Using Time-Based Graphical Models,” Computers & Graphics Journal.

23)  M. Altinel, E. Arpali, T. M. Sezgin, F. Sezgin, F. Gonenc, A. Yazicioglu, “A New Logistic Regression Based Nomogram Developed for Predicting Prostate Biopsy Outcomes in the Turkish Population,” 20th Congress of the Turkish Urology Association, Antalya, Turkey.

22)  P. Biswas, T. M. Sezgin and P. Robinson, “Perception Model for People with Visual Impairments.” Proceedings of the 10th International Conf. on Visual Information Systems (LNCS 5188), Salerno, Italy.

2007

21)  T. M. Sezgin and P. Robinson, “Affective Video Data Collection Using an Automobile Simulator.” Second International Conference on Affective Computing and Intelligent Interaction, Lisbon, Portugal.

20)  X. Pan, M. Gillies, T. M. Sezgin, C. Loscos, “Expressing Complex Mental States Through Facial Expressions,” Second International Conference on Affective Computing and Intelligent Interaction, Lisbon, Portugal.

19)  T. M. Sezgin and R. Davis, “Temporal Sketch Recognition in Interspersed Drawings,” Fourth Eurographics Workshop on Sketch-Based Interfaces and Modeling, University of California, Riverside, CA.

18)  T. M. Sezgin and R. Davis. “Sketch Interpretation Using Multiscale Models of Temporal Patterns,” IEEE Computer Graphics & Applications Journal, Volume: 27, Issue: 1, pp: 28-37.

17)  Dibeklioglu, T. M. Sezgin, E. Ozcan, “A Recognizer for Free-Hand Graph Drawings,” International Workshop on Pen-Based Learning Technologies, Catania, Italy.

16)  T. M. Sezgin, “Overview of Recent Work in Pen-Centric Computing,” Invited Workshop on Pen-Centric Computing, Providence RI.

15)  T. M. Sezgin, “Sketch Interpretation Using Multiscale Models of Temporal Patterns,” In NESCAI ’06, Northeast Student Colloquium on Artificial Intelligence, Ithaca, NY.

2006

14)  Sezgin and T. M. Sezgin, “On the Statistical Analysis of Feigenbaum Constants,” Journal of the Franklin Institute, vol. 343, pp. 756-758.

13)  T. M. Sezgin, T. Stahovich, and R. Davis, “Sketch Based Interfaces: Early Processing for Sketch Understanding,” SIGGRAPH ’06 Courses, August 2006.

12)  T. M. Sezgin and R. Davis, “Scale-space based feature point detection for digital ink,” SIGGRAPH ’06 Courses, August 2006.

11)  T. M. Sezgin, T. Stahovich, and R. Davis, “Sketch Based Interfaces: Early Processing for Sketch Understanding,” ACM International Conf. Proc. Series; Vol. 15. Perceptive User Interfaces, Orlando FL.

2005

10)  T. M. Sezgin and R. Davis, “HMM-Based Efficient Sketch Recognition,” In Proceedings of the International Conference on Intelligent User Interfaces (IUI’05), San Diego, CA.

9)    T. M. Sezgin and R. Davis, “Modeling Online Sketching as a Dynamic Process,” In Proceedings of CSAIL Student Workshop ’05 Gloucester, MA.

2004

8)    T. M. Sezgin and R. Davis, “Handling Overtraced Strokes in Hand-Drawn Sketches,” In Proceedings of the AAAI Spring Symposium Series: Making Pen-Based Interaction Intelligent and Natural, Washington DC.

7)    T. M. Sezgin and R. Davis, “Scale-space Based Feature Point Detection for Digital Ink,” In Proceedings of the AAAI Spring Symposium Series: Making Pen-Based Interaction Intelligent and Natural, Washington DC.

2003

6)    T. M. Sezgin, “Recognition efficiency issues for freehand sketches,” Proceedings of the MIT Student Oxygen Workshop, Gloucester, MA.

2002

5)    Randall Davis, Aaron Adler, Christine Alvarado, Tracy Hammond, Rebecca Hitchcock, Michael Oltmans, T. M. Sezgin, Olya Veselova, “Designs for the Future,” MIT Artificial Intelligence Laboratory Annual Abstract.

4)    Tracy Hammond, T. M. Sezgin, Olya Veselova, Aaron Adler, Michael Oltmans, Christine Alvarado, Rebecca Hitchcock, “Multi-domain sketch recognition,” Proceedings of the 2nd Annual MIT Student Oxygen Workshop.

3)    T. M. Sezgin, Randall Davis, “Generating domain specific sketch recognizers from object descriptions,” Student Oxygen Workshop, July.

2)    Randall Davis, Aaron Adler, Christine Alvarado, Tracy Hammond, Rebecca Hitchcock, Michael Oltmans, T. M. Sezgin, Olya Veselova, “Art and the Future,” MIT Artificial Intelligence Laboratory Annual Abstract.

2001

Theses

Salih Ozgur Oguz, A Negotiation Model for Affective Visuo-Haptic Communication Between a Human Operator and a Machine, M.S. Thesis. Department of Electrical and Computer Engineering, Koc University (2010).

Projects

  • Tangible Intelligent Interfaces for Teaching Computational Thinking Skills, Scientific & Technological Research Council of Turkey, High Priority Areas R&D Program, (Principal Investigator, Koç University), 2019 – 2021, 1003S
  • Backchannel Feedback Modeling for Human-Computer Interaction ( E.Erzin, Y. Yemez, T.M. Sezgin). Funded by Scientific & Technological Research Council of Turkey, 2018-2020
  • JOKER
    European Commission ERA-NET Program, 2013-2017. The JOKER project will build a generic intelligent user interface providing a multimodal dialogue system with social communication skills, including humor, empathy, compassion, charm, and other informal socially-oriented behavior.
  • iMotion
    European Commission ERA-NET Program, 2013-2017. The IMOTION project will develop and evaluate innovative multimodal user interfaces for interacting with augmented videos. Starting from an extension of existing query paradigms (keyword search in manual annotations) and image search (query by example in key frames), IMOTION will consider novel sketch- and speech-based user interfaces.
  • ASC-Inclusion (Sponsored by the European Commission) 2011-2014.

The main goal of this project is to develop a computer software program that will assist children with Autism Spectrum Conditions (ASC) in understanding and expressing emotions through facial expressions, tone of voice, and body gestures. This software will help them understand and interact with other people and, as a result, will increase their inclusion in society. Academic partners include University of Cambridge, United Kingdom; Technische Universität München, Germany; Bar Ilan University, Israel; Koç University, Turkey; and Università degli Studi di Genova, Italy.

  • Intelligent Interfaces for eLearning
    Scientific & Technological Research Council of Turkey, 2013-2016.

The goal of this project is to build the pen-based interfaces for the classroom of the future, and it is funded under the National Priority Areas R&D Program of the Research Council of Turkey (TUBITAK). The scope of the project is not public at the moment. Contact Dr. Sezgin for details.

  • Semi-supervised Intelligent Multimodal Content Translator for Smart TVs
    SANTEZ Programme, Ministry of Science, Industry, and Technology, Turkey 2012-2014

TVs are slowly morphing into powerful set-top computers with internet connections. As such, they are gradually taking over roles and functions traditionally associated with desktop computers; TV users, for example, can browse the internet on their TV. Unfortunately, the vast majority of content on the internet has been designed for desktop viewing, so it has to be adapted for viewing on a TV. In this project, we aim to develop a semi-automatic content retargeting system that is expected to work with minimal intervention from an expert.

  • Gesture-Based Interfaces
    Koç Sistem R&D Programme, 2011-2013
  • Pen-based Multimodal Intelligent User Interfaces
    Career Grant, Scientific and Technological Research Council of Turkey, 2011-2014
  • Educational Sketch-Based Intelligent Interfaces
    Turk Telekom R&D Programme, 2010-2013
  • Interactive Intelligent Sketching Board
    KOLT Teaching Innovation Grant, 2010-2011
  • Deep Green: Commander’s Associate
    DARPA/BAE/SIFT (British Aerospace/Smart Information Flow Technologies), 2008-2009
  • Deep Green: Commander’s Associate
    DARPA/SAIC (Science Applications International Corporation), 2008-2009
  • Early Processing for Sketch Recognition

Freehand sketching is a natural and crucial part of everyday human interaction, especially important in design, yet is unsupported by current design automation software. We are working to combine the flexibility and ease of use of paper and pencil with the processing power of a computer, to produce a design environment that feels as natural as paper, yet is considerably smarter. One of the most basic steps in accomplishing this is converting the original digitized pen strokes in the sketch into the intended geometric objects. We have implemented a system that combines multiple sources of knowledge to provide robust early processing for freehand sketching.
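As a toy illustration of this early-processing step, the sketch below flags candidate corner points in a digitized stroke using a simple turning-angle test. This is a deliberate simplification of our published approach, which combines multiple knowledge sources such as curvature and pen-speed information; the function name and threshold here are illustrative choices, not part of the actual system.

```python
import math

def corner_candidates(stroke, angle_thresh_deg=45.0):
    """Flag candidate corner points in a digitized stroke.

    stroke: list of (x, y) pen samples in drawing order.
    A point is a candidate corner when the direction of travel
    turns by more than angle_thresh_deg at that point.
    """
    corners = []
    for i in range(1, len(stroke) - 1):
        (x0, y0), (x1, y1), (x2, y2) = stroke[i - 1], stroke[i], stroke[i + 1]
        a_in = math.atan2(y1 - y0, x1 - x0)    # incoming direction
        a_out = math.atan2(y2 - y1, x2 - x1)   # outgoing direction
        turn = abs(a_out - a_in)
        turn = min(turn, 2 * math.pi - turn)   # wrap angle into [0, pi]
        if math.degrees(turn) > angle_thresh_deg:
            corners.append(i)
    return corners

# An L-shaped stroke: straight to the right, then straight down.
stroke = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
print(corner_candidates(stroke))  # the corner of the "L" is at index 3
```

A production system would additionally smooth the stroke and fuse this curvature cue with pen-speed minima, since people slow down at intended corners.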

Selected Publications

  • Tevfik Metin Sezgin. Feature Point Detection and Curve Approximation for Early Processing of Free-Hand Sketches. Master’s Thesis, May 2001. Department of EECS, MIT.
  • Tevfik Metin Sezgin and Randall Davis. Handling Overtraced Strokes in Hand-Drawn Sketches. In Making Pen-Based Interaction Intelligent and Natural, 2004.
  • Tevfik Metin Sezgin and Randall Davis. Scale-space Based Feature Point Detection for Digital Ink. In Making Pen-Based Interaction Intelligent and Natural, 2004.
  • Tevfik Metin Sezgin, Thomas Stahovich, and Randall Davis. Sketch Based Interfaces: Early Processing for Sketch Understanding. Workshop on Perceptive User Interfaces, Orlando, FL, 2001.
  • Tevfik Metin Sezgin and Randall Davis. Early Sketch Processing with Application in HMM Based Sketch Recognition. In MIT Computer Science and Artificial Intelligence Laboratory Technical Report AIM-2004-016, July 2004.


Sketch Recognition

A major portion of pen-centric research has revolved around the goal of enabling natural human-computer interaction. We believe progress in recognition techniques is critical to achieving the goal of natural sketch-based interfaces. We need to improve over the existing recognition algorithms in terms of efficiency and recognition accuracy. Our work in recognizing sketches using temporal patterns that naturally appear in online sketching contributes toward addressing these algorithmic issues.

Our analysis of real sketch examples from target user groups has revealed that individuals have personal sketching styles manifested in the form of patterns in temporal stroke orderings (i.e., people tend to use predictable stroke orderings during sketching). Based on this finding, we have developed two algorithms that use ensembles of Hidden Markov Models (HMMs) and Dynamic Bayesian Networks (DBNs) to learn temporal patterns in stroke orderings and perform efficient recognition.
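To make the idea concrete, here is a heavily simplified, hypothetical sketch of likelihood-based classification over stroke orderings. It replaces the HMM/DBN ensembles of our actual algorithms with plain first-order Markov models over discrete stroke-primitive labels; all names and the toy training data are illustrative only.

```python
import math
from collections import defaultdict

def train_markov(sequences, smoothing=1.0):
    """Estimate start and transition probabilities (add-one smoothed)
    from sequences of stroke-primitive labels, e.g. 'circle', 'line'."""
    symbols = {s for seq in sequences for s in seq}
    starts = defaultdict(float)
    trans = defaultdict(float)
    for seq in sequences:
        starts[seq[0]] += 1
        for a, b in zip(seq, seq[1:]):
            trans[(a, b)] += 1
    n = len(symbols)

    def start_p(s):
        return (starts[s] + smoothing) / (len(sequences) + smoothing * n)

    def trans_p(a, b):
        total = sum(trans[(a, c)] for c in symbols)
        return (trans[(a, b)] + smoothing) / (total + smoothing * n)

    return symbols, start_p, trans_p

def log_likelihood(model, seq):
    """Score a stroke-label sequence under one class model."""
    symbols, start_p, trans_p = model
    if any(s not in symbols for s in seq):
        return float('-inf')  # unseen primitive: impossible under this class
    ll = math.log(start_p(seq[0]))
    for a, b in zip(seq, seq[1:]):
        ll += math.log(trans_p(a, b))
    return ll

def classify(models, seq):
    """Pick the class whose stroke-ordering model scores the sequence highest."""
    return max(models, key=lambda c: log_likelihood(models[c], seq))

# Hypothetical training data: stroke orders observed for two symbol classes.
models = {
    'stick_figure': train_markov([['circle', 'line', 'line', 'line'],
                                  ['circle', 'line', 'line', 'line', 'line']]),
    'arrow':        train_markov([['line', 'line', 'line'],
                                  ['line', 'line', 'line']]),
}
print(classify(models, ['circle', 'line', 'line']))
```

An HMM additionally models hidden states behind the observed strokes, and a DBN generalizes this further, but the recognition principle is the same: train one temporal model per symbol class, then pick the class with the highest sequence likelihood.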

Selected Publications

  • Tevfik Metin Sezgin and Randall Davis. Temporal Sketch Recognition in Interspersed Drawings. Fourth Eurographics Workshop on Sketch-Based Interfaces and Modeling, University of California, Riverside, CA, August 2-3, 2007.
  • Tevfik Metin Sezgin. Overview of Recent Work in Pen-Centric Computing: Vision and Research Summary. In Invited Workshop on Pen-Centric Computing Research, Brown University, March 26-28, 2007.
  • Tevfik Metin Sezgin and Randall Davis. Sketch Interpretation Using Multiscale Models of Temporal Patterns. In IEEE Computer Graphics and Applications, Volume 27, Issue 1, pp. 28-37, 2007.
  • Tevfik Metin Sezgin and Randall Davis. HMM-Based Efficient Sketch Recognition. In Proceedings of the International Conference on Intelligent User Interfaces (IUI’05), New York, New York, January 9-12, 2005.
  • Tevfik Metin Sezgin and Randall Davis. Modeling Sketching as a Dynamic Process. In CSW ’05, Gloucester, MA, 2005.
  • Tevfik Metin Sezgin and Randall Davis. Efficient search space exploration for sketch recognition. In MIT Computer Science and Artificial Intelligence Laboratory Annual Research Abstract. 2004.
  • Tevfik Metin Sezgin and Randall Davis. Early Sketch Processing with Application in HMM-Based Sketch Recognition. In MIT Computer Science and Artificial Intelligence Laboratory Technical Report AIM-2004-016, July 2004.
  • Tevfik Metin Sezgin. Generic and HMM-based approaches to freehand sketch recognition. Proceedings of the MIT Student Oxygen Workshop, 2003.
  • Tevfik Metin Sezgin. Recognition efficiency issues for freehand sketches. Proceedings of the MIT Student Oxygen Workshop, 2003.
  • Tracy Hammond, Metin Sezgin, Olya Veselova, Aaron Adler, Michael Oltmans, Christine Alvarado, and Rebecca Hitchcock. Multi-Domain Sketch Recognition. Proceedings of the 2nd Annual MIT Student Oxygen Workshop, 2002.
  • Tevfik Metin Sezgin. Generating Domain Specific Sketch Recognizers From Object Descriptions. Proceedings of the MIT Student Oxygen Workshop, 2002.
  • Christine Alvarado, Metin Sezgin, Dana Scott, Tracy Hammond, Zardosht Kasheff, Michael Oltmans, and Randall Davis. A Framework for Multi-Domain Sketch Recognition. In MIT Artificial Intelligence Laboratory Annual Abstract, September 2001.


Readily Deployable Sketch-Based Applications

Part of our current research effort aims to construct and evaluate sketch-based applications for domains where recognition is robust enough to allow deployment in real settings. Unlike our work on developing sketch recognition algorithms, this line of research emphasizes building systems that can readily be adopted by the intended audience and immediately integrated into their workflow. The focus is therefore on constructing and evaluating pen-based interfaces for domains that are simple enough to yield reasonably high recognition rates with the current state of the art in sketch recognition. Graphs and course of action diagrams are two such domains.

Graph Manipulation

Along with collaborators, we developed and evaluated an application that allows computer science students to draw and interact with directed and undirected graphs using a pen-based interface. The recognition engine of this application used a variety of methods, including Kohonen networks, iterative closest point, and parallel sampling algorithms, for recognizing user-drawn graphs and digits. Watch these clips to see the tool in action: [Clip 1] [Clip 2]. You'll need the Camtasia codec to play the videos.
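To give a flavor of the iterative closest point ingredient, here is a deliberately minimal, translation-only ICP sketch that aligns a "drawn" point set with a template before matching. The helper name, point counts, and shapes are our own illustration; the actual recognizer is considerably more involved (full ICP also estimates rotation and scale):

```python
import numpy as np

def icp_translation(src, dst, iters=20):
    """Translation-only ICP: repeatedly match each source point to its
    nearest destination point, then shift the source by the mean residual."""
    src = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        shift = (matched - src).mean(axis=0)
        src += shift
        if np.linalg.norm(shift) < 1e-9:
            break
    return src

# Template circle vs. a "drawn" copy offset by (2, -1)
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
template = np.stack([np.cos(theta), np.sin(theta)], axis=1)
drawing = template + np.array([2.0, -1.0])

aligned = icp_translation(drawing, template)
residual = np.linalg.norm(aligned - template, axis=1).mean()
print(residual)  # close to zero once alignment converges
```

Once the drawing is registered against a template, a simple distance measure (or a learned classifier over the residuals) can decide which template the user drew.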

Course of Action Diagram Recognition

Course of action diagrams are drawings constructed by military commanders to depict military scenarios (e.g., locations and movements of friendly and enemy units). They are typically drawn by hand on layers of acetate overlaid on top of maps. We are currently working on systems that can recognize course of action diagrams as they are drawn. This is a three-year project with current funding for two PhD students. We are also looking for summer interns to work on related projects.

This project is in collaboration with Dr. Hammond from Texas A&M University and Dr. Alvarado from Harvey Mudd College, USA.

Selected Publications

  • Blessing, T. M. Sezgin, R. Arandjelovic, P. Robinson. A multimodal interface for road design. Workshop on Sketch Recognition, International Conference on Intelligent User Interfaces, Sanibel, FL, February 2009.
  • Hamdi Dibeklioglu, Tevfik Metin Sezgin and Ender Ozcan. A Recognizer for Free-Hand Graph Drawings. In International Workshop on Pen-Based Learning Technologies, Catania, Italy, May 24-25, 2007.
  • Tevfik Metin Sezgin. Overview of Recent Work in Pen-Centric Computing: Vision and Research Summary. In Invited Workshop on Pen-Centric Computing Research, Brown University, March 26-28, 2007.


Affective Computing and Applications

In collaboration with colleagues from University of Cambridge, we are exploring ways of animating avatars to display emotions as people do. Our primary interest is in applications of machine learning for inferring people’s affective state and affective animation of avatars.

Driver Monitoring and Intelligent Interfaces for Automobiles

Automatic recognition of drivers' affective state has received interest as a potential source of information for in-car driver monitoring systems. Although there have been studies using relatively invasive physiological measurements and expensive eye-tracking hardware, facial appearance data has not been explored as much. We have investigated ways of inferring the physical and mental states of drivers from video data. We compiled a video corpus by recording drivers subjected to a set of controlled driving conditions in a driving simulator. We are currently exploring ways of automatically processing the video data to facilitate higher-fidelity annotation and mental state recognition. We are looking for MS and PhD students to work on related projects.

Publications

  • Afzal, T. M. Sezgin, Y. Gao, P. Robinson. Perception of Emotional Expressions through Facial Feature Points. International Conference on Affective Computing and Intelligent Interaction, Amsterdam, Netherlands, September 10-11, 2009.
  • Gao, T. M. Sezgin, N. Dodgson. Automatic construction of 3D animatable facial models. International Conference on Computer Animation and Social Agents, Amsterdam, Netherlands, June 17-19, 2009.
  • T. M. Sezgin, I. Davies, P. Robinson. Multimodal inference for driver-vehicle interaction. Workshop on Multimodal Interfaces for Automotive Applications, International Conference on Intelligent User Interfaces, Sanibel, FL, February 2009.

  • Tevfik Metin Sezgin, Peter Robinson. Affective Video Data Collection Using an Automobile Simulator. Second International Conference on Affective Computing and Intelligent Interaction, Lisbon, Portugal, September 12-14, 2007.
  • Xueni Pan, Marco Gillies, Tevfik Metin Sezgin, Celine Loscos. Expressing Complex Mental States Through Facial Expressions. Second International Conference on Affective Computing and Intelligent Interaction, Lisbon, Portugal, September 12-14, 2007.

Contact

Address

Rumelifeneri Yolu 34450 Sarıyer / İstanbul