Academic teaching-based research 2017 part 2



The recent 13th China Europe Symposium on Software Engineering Education (CEISEE), held in Athens, Greece, on 24-25th May 2017, provided the Computing Academic team with an opportunity to present some of their work in teaching computing. In the picture above, going left to right: Thomas Butler, Liz Coulter-Smith, Suraj Ajit, Scott Turner and Ryan Edwards.

Three papers have been discussed previously: https://computingnorthampton.blogspot.co.uk/2017/05/academic-teaching-based-research-2017.html


Details of a further three of the six papers presented can be found below.



1. Seven Deadly Sins of Software Flexibility 
Thomas Butler, Mark Johnson

Abstract. As software development techniques evolve, practices emerge which both help and hinder software development. These practices are often identified first by industry experts who work with large codebases in big teams. There are many software development techniques that have been labelled "bad practice" by these industry experts that aren't formally recognised in academia. This paper briefly describes some of these bad practices.



2. Assessment of Programming Modules in Software Engineering Education 
Suraj Ajit

Abstract. Assessment plays a very important role in how students learn. There has been extensive research done on different modes of assessments in higher education. This paper reports on how that research applies to programming modules in Software Engineering education. In particular, the paper reports on the author’s experience and the student perceptions of several assessment modes in programming modules. A questionnaire survey was conducted among 167 undergraduate computing students to get their perceptions on the preferred mode of assessment for programming modules in Software Engineering education. The paper also investigates the effect automated marking systems could have on programming assessments by gathering student perceptions. Student perceptions sought reveal that they would complete more weekly lab exercises if they were to get instant feedback on their submissions from an automated tool. 


3. An Assessment of the Impact on Student Learning via the Use of Role-Play to Simulate Client Interactions within Software Engineering Assessments 
Mark Johnson , Ryan Edwards, Heydon Hancox

All views and opinions are the author's and do not necessarily reflect those of any organisation they are associated with. Twitter: @scottturneruon