9th Week of 2022

Research

  • While reading Amy Ko's blog, I read about Pyret. Interesting to learn that there is a language that allows defining test cases for each function. Actually, it supports not only test case automation but also code documentation. Test cases are written at the end of a function definition, after a 'where' clause; code documentation goes right after the function name and parameters, in a 'doc' clause. Nice and simple (the first sketch after this list shows a rough Python analogue).
  • Read the paper “So you’ve graduated college and need to test apps: What barriers might you face?”, by several fellow professors at UFMS.
    • The authors conducted an opinion survey with novice testers, addressing the barriers they faced in planning, executing, and analyzing software testing activities. The open questions were analyzed with thematic synthesis, producing a mental map of the barriers faced by those novice testers. The results corroborate those reported in related studies (such as Igor Steinmacher's PhD thesis and related papers). However, some themes and barriers look more critical when considering software testing. For instance, technical and onboarding barriers look tougher, as tooling for testing mobile applications changes much faster than for other types of applications, which makes documentation and specific technical knowledge on testing obsolete more quickly. For instance, although we often learn and teach test automation with JUnit and Jest, tests for mobile applications usually rely on different testing frameworks. Specifically for mobile applications, there is a barrier regarding platform dependencies and their implications: how to effectively design and execute test cases for so many mobile devices? Well, there are several other barriers, but a look at Figures 1 and 2 of the paper will provide a better picture than further text here.
    • About the implications, the authors suggest including software testing activities from the beginning of formal Computing education. Nothing new here. TDD could address several of the barriers identified in the paper. Regarding the platform dependency barrier, we could design test criteria for contextual and hardware elements (the second sketch after this list gives a rough idea). Regarding context change, pair programming would be a nice addition, changing the project considered each day or week. All of that should also foster proper documentation of software testing processes :-)
  • Picked some papers from Communications of the ACM to read.
    • The first one was “The Pushback Effects of Race, Ethnicity, Gender, and Age in Code Review”. It is somewhat aligned with a theme developed in Mariana's research. It provides more evidence that “some demographic groups face more code review pushback than others”.
      • Drawing on role congruity theory, they predicted more pushback in reviews of code authored by a person that “belongs to a group whose stereotypes do not align with the perceived qualities of a successful programmer or software engineering.” The stereotype was modeled along three dimensions: gender, race/ethnicity, and age.
      • As the dependent variable, they considered “the perception of unnecessary interpersonal conflict in code review while a reviewer is blocking a change request”, which they name pushback. Pushback is identified by excessive change requests and withholding of code review approval, and was measured by the number of review rounds, the amount of time spent by reviewers, and the amount of time spent by the code author addressing the reviewers' concerns.
      • Results: “Women [code] authors face higher odds of pushback than men; Asian, Black, and Hispanic/Latinx [code] authors face higher odds than White authors; and older [code] authors face higher odds than younger authors.”
      • Confounding factors not accounted for: language spoken by code authors, code quality in the change under review.
    • Next was “Here We Go Again: Why is It Difficult for Developers to Learn Another Programming Language?” (10.1145/3511062). Oddly, the PDF retrieved from the ACM DL was incomplete (it had just the first page), but I could read the whole paper using the ACM DL reader.
      • Although we often focus on introductory programming classes, there is also the problem of learning to program in a second (or third :-) programming language. That is the issue this paper tackles.
      • The authors used a mixed-methods approach, comprising an analysis of questions posed on Stack Overflow and semi-structured interviews. Regarding the Stack Overflow questions, they considered the ones that identified correct and incorrect assumptions across pairs of languages.
      • There is evidence that cross-language interference occurs: for some language pairs, there are more misconceptions; for others, the knowledge was transferable.
      • Unlike novice programmers, experienced programmers often learn on their own, just in time, and try to relate concepts to previously known languages. However, those learning strategies may not be suitable for a new language. For instance, there may be no clear mapping for the same concept between languages, documentation and examples for such mappings may be lacking, or the language may implement a new paradigm that is incompatible with the previous one. Another issue is tooling: programming environments and features may differ considerably, making it harder to program in the new language. (The last sketch after this list shows a concrete example of such a misconception.)
      • The results of the paper are complemented by the introduction by Jonathan Aldrich (10.1145/3511061). We should not assume that it is easy to learn a new programming language. From a programming education viewpoint, we should be aware that “old knowledge can either facilitate learning new knowledge or interfere with it.”
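
A rough analogue of Pyret's 'doc' and 'where' clauses, sketched in Python with the standard doctest module (my own illustration, not from Amy Ko's post): the docstring plays the role of 'doc', and the embedded examples play the role of the 'where' test block.

<code python>
# Sketch: emulating Pyret's 'doc' + 'where' clauses with Python's doctest.
# The docstring both documents the function and holds its test cases.

def double(n):
    """Return twice the given number (the 'doc' part).

    The examples below act like Pyret's 'where' block (the test part):
    >>> double(2)
    4
    >>> double(-3)
    -6
    """
    return 2 * n

if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=True)  # runs the embedded examples as tests
</code>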
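
On the platform dependency barrier: a minimal sketch of what a test criterion over contextual and hardware elements could look like, using pytest parameterization. The device list and the layout_fits_screen function are hypothetical placeholders, just to illustrate requiring every test to pass on every configuration.

<code python>
# Sketch: one test executed across several hardware configurations via
# pytest parameterization. DEVICES and layout_fits_screen are made up.
import pytest

DEVICES = [
    ("small-phone", 320, 568),
    ("large-phone", 414, 896),
    ("tablet", 768, 1024),
]

def layout_fits_screen(width, height):
    # Placeholder for the real behavior under test.
    return width >= 320 and height >= 480

@pytest.mark.parametrize("name,width,height", DEVICES)
def test_layout_fits_each_device(name, width, height):
    assert layout_fits_screen(width, height), f"layout broke on {name}"
</code>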
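
Finally, a concrete illustration (mine, not taken from the paper) of the cross-language interference discussed above: a programmer coming from Java might expect a function's default argument to be created fresh on every call, but Python evaluates default values once, at function definition time.

<code python>
# Sketch: a classic misconception when moving from Java/C++ to Python.
# The default list is created once and shared across calls.

def append_buggy(item, items=[]):      # default evaluated at definition time
    items.append(item)
    return items

def append_fixed(item, items=None):    # idiomatic workaround
    if items is None:
        items = []
    items.append(item)
    return items

print(append_buggy(1))   # [1]
print(append_buggy(2))   # [1, 2]  <- surprise for the newcomer
print(append_fixed(1))   # [1]
print(append_fixed(2))   # [2]
</code>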