work:semana_9_de_2022 [2022/03/02 14:24] – [Research] magsilva
     * The authors conducted an opinion survey with novice testers, addressing the barriers they faced in planning, executing, and analyzing software testing activities. The open questions were analyzed with thematic synthesis, producing a mental map of the barriers faced by those novice testers. The results corroborate those reported in related studies (such as Igor Steinmacher's PhD thesis and related papers). However, some themes and barriers look more critical when considering software testing. For instance, technical and onboarding barriers look tougher, as tooling for testing mobile applications changes much faster than for other types of applications, which renders documentation and specific technical knowledge on testing obsolete more quickly. For instance, although we often learn and teach test automation using JUnit and Jest, tests for mobile applications usually rely on different testing frameworks. Specific to mobile applications, there is a barrier regarding platform dependencies and their implications: how to effectively design and execute test cases for so many mobile devices? There are several other barriers, but a look at Figures 1 and 2 of the paper will provide a better picture than further text here.
     * About the implications, the authors suggest including software testing activities from the beginning of formal Computing education. Nothing new here. TDD could address several of the barriers identified in the paper. Regarding the platform dependency barrier, we could design test criteria for contextual and hardware elements. Regarding context change, pair programming would be a nice addition, changing the project considered each day or week. All of that should also foster proper documentation of software testing processes too :-)
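     * Just to make the TDD point above concrete: the test-first style we usually teach with JUnit or Jest can be sketched with Python's built-in unittest instead (the function under test, parse_version, is a hypothetical example, not from the paper): <code python>
import unittest

def parse_version(text):
    """Hypothetical function under test: parse 'major.minor' into a tuple."""
    major, minor = text.split(".")
    return int(major), int(minor)

class TestParseVersion(unittest.TestCase):
    # In TDD, these tests would be written before parse_version itself,
    # and the implementation grown until they pass.
    def test_simple_version(self):
        self.assertEqual(parse_version("1.2"), (1, 2))

    def test_rejects_garbage(self):
        # A string without exactly one dot cannot be unpacked: ValueError.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()
</code> The same red-green-refactor loop applies regardless of framework; only the tooling (and, for mobile, the device/platform matrix) changes.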
  * Pick some papers from ACM Communications to read.
    * The first one was "The Pushback Effects of Race, Ethnicity, Gender, and Age in Code Review". It is somewhat aligned with a theme developed in Mariana's research. It provides more evidence that "some demographic groups face more code review pushback than others".
      * Considering role congruity theory, they predicted the evaluation of code reviews of code authored by a person who "belongs to a group whose stereotypes do not align with the perceived qualities of a successful programmer or software engineering." The stereotype was modeled along three dimensions: gender, race/ethnicity, and age.
      * As the dependent variable, they considered "the perception of unnecessary interpersonal conflict in code review while a reviewer is blocking a change request", which is named pushback. Pushback is identified by excessive change requests and the withholding of approval during code review. It was measured by the number of review rounds, the amount of time spent by reviewers, and the amount of time spent by the code author addressing the reviewers' concerns.
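      * The measurement idea above (review rounds and time spent as pushback proxies) can be illustrated with a toy computation; all records, field names, and the threshold here are hypothetical, not taken from the paper: <code python>
# Toy illustration of pushback proxies: review rounds plus reviewer
# and author hours per review (all values are made-up examples).
reviews = [
    {"rounds": 2, "reviewer_hours": 1.0, "author_hours": 2.5},
    {"rounds": 5, "reviewer_hours": 3.5, "author_hours": 6.0},
]

def is_high_pushback(review, round_threshold=4):
    """Flag a review as high-pushback when it exceeds a round threshold."""
    return review["rounds"] > round_threshold

high_pushback = [r for r in reviews if is_high_pushback(r)]
print(len(high_pushback))  # how many toy reviews exceed the threshold
</code> The paper itself models these outcomes statistically across demographic groups; this sketch only shows the kind of per-review quantities being compared.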