<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Research and Evaluation | BIRD UX</title>
	<atom:link href="https://birdux.studio/en/category/user-research-evaluation/feed/" rel="self" type="application/rss+xml" />
	<link>https://birdux.studio/en</link>
	<description>Boutique UX agency in Mannheim and Berlin</description>
	<lastBuildDate>Mon, 02 Jun 2025 07:54:14 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://birdux.studio/wp-content/uploads/2025/06/favicon-32x32-1.png</url>
	<title>Research and Evaluation | BIRD UX</title>
	<link>https://birdux.studio/en</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>A petition for a digital inclusivity countdown</title>
		<link>https://birdux.studio/en/a-petition-for-a-digital-inclusion-countdown/</link>
		
		<dc:creator><![CDATA[Jennifer]]></dc:creator>
		<pubDate>Mon, 22 Apr 2024 17:01:40 +0000</pubDate>
				<category><![CDATA[Research and Evaluation]]></category>
		<guid isPermaLink="false">http://neu.thegeekettez.com/?p=21932</guid>

					<description><![CDATA[A few weeks ago, we attended an event organised by Digital Media Women Rhein-Neckar and Business Professional Women Mannheim-Ludwigshafen, a "Future Talk" panel on the digital gender gap, which describes, among other things, the different levels of digitalisation between men and women (read more at initiatived21.de). The panel featured Maren Heltsche, co-founder of speakerinnen.org and special representative for the [...]]]></description>
										<content:encoded><![CDATA[<p>A few weeks ago, we were at an event organised by the <a href="https://digitalmediawomen.de/quartiere/quartier-rhein-neckar/" target="_blank" rel="noopener">Digital Media Women Rhine-Neckar</a> and the <a href="https://www.bpw-mannheim-ludwigshafen.de/" target="_blank" rel="noopener">Business Professional Women Mannheim-Ludwigshafen</a>: a "Future Talk" panel on the digital gender gap, which describes, among other things, the different degrees of digitalisation between men and women (read more at <a href="https://initiatived21.de/publikationen/digital-gender-gap" target="_blank" rel="noopener">initiatived21.de</a>). On the panel were <a href="https://www.frauenrat.de/verband/vorstand/sonderbeauftragte-des-vorstands/maren-heltsche/" target="_blank" rel="noopener"><strong>Maren Heltsche</strong></a>, co-founder of <a href="https://speakerinnen.org/" target="_blank" rel="noopener">speakerinnen.org</a> and special representative for the policy field of digitalisation at the German Women's Council, and <a href="https://johannah-illgner.de/ueber-mich/" target="_blank" rel="noopener"><strong>Johanna Illgner</strong></a>, city councillor for the city of Heidelberg and co-founder of Plan W - Agency for Strategic Communication.</p>



<p>During the hour-long discussion, Maren and Johanna explored the question of why the digital gender gap exists and discussed possible solutions. The discussion centred on inequality within the workforce of digital teams, but also on the so-called "digital gender data gap". The digital gender data gap refers to the state of the data that feeds algorithms and is used to train AIs. This data is currently quite one-sided and disadvantages marginalised groups. In the course of the discussion, these topics led us to the question: if we have an online accessibility countdown, why don't we actually have a countdown for digital inclusivity, an online inclusivity countdown?</p>



<h3 class="wp-block-heading">What is the online accessibility countdown?</h3>



<p>The online accessibility countdown refers to a law passed in 2021 that, in short, requires digital services in the EU to be accessible to people with disabilities. Annika Brinkmann has since put a page online that shows how many days are left until all websites in the EU must be accessible. The page describes the online accessibility countdown as follows: "The European Accessibility Act (EAA) comes into force on 28 June 2025. By then, the websites of companies with more than 10 employees and more than EUR 2 million in annual turnover must be accessible, as must websites published after that date."<br>However, the online accessibility countdown is not the topic of this article - it is only a template for us, an inspiration for a truly accessible and inclusive digitalised world.</p>
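

<p>Incidentally, the arithmetic behind such a countdown page is nothing more than a date difference. Here is a minimal illustrative sketch (our own, not Annika Brinkmann's implementation), using the EAA date quoted above:</p>



<pre class="wp-block-code"><code>from datetime import date

# EAA application date quoted above
EAA_DEADLINE = date(2025, 6, 28)

days_left = (EAA_DEADLINE - date.today()).days
if days_left > 0:
    print(f"{days_left} days until the European Accessibility Act applies")
else:
    print("The European Accessibility Act already applies")
</code></pre>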



<h2 class="wp-block-heading">Why do we need a digital inclusivity countdown?</h2>



<h3 class="wp-block-heading">Reason #1 If we want to become progressive, we must not cling to the status quo</h3>



<p>There are many aspects of the digital gender gap that one can focus on. One example is the <strong>homogeneous, largely male-dominated teams within the tech industry</strong>. There is inequality within the workforce of tech teams, with the proportion of women in the industry varying somewhat depending on geographical location. According to <a href="https://www.womentech.net/en-de/women-in-tech-stats" target="_blank" rel="noopener">www.womentech.net</a>, it will take between 53 years (in Latin America and the Caribbean) and 189 years (in East Asia and the Pacific) to close this gap. This has a direct and indirect impact on the products and services that are developed, as it creates a relatively one-sided perspective.</p>



<p>Although the above figures are shocking, our petition for a digital inclusivity countdown focuses on closing the so-called <strong>digital gender data gap</strong>. The reason for this is that the rapid development of AI threatens a regression or standstill instead of an improvement in the inclusivity of digital services and products - which requires immediate action.</p>



<p>Currently, there is a very high probability that data relied on by algorithms and used to train AIs is biased, sexist and therefore harmful or discriminatory towards marginalised groups. This state of affairs is historically conditioned. During the panel discussion, Maren Heltsche rightly said that <strong>in an unfair world with unfair data, fair systems seem utopian</strong>.</p>



<p>An additional problem: training data is not regulated. One of the reasons for this is economic: the business models and competitive advantage of most AI companies would be destroyed if it were possible to gain access to their data. But if it now appears that, solely in the interests of the success of AI companies, we have agreed that regulation at the international level is impossible, that is only half the truth. The question remains: who, i.e. which authority, should be responsible for regulating data internationally?</p>



<p>If you believe that it will take some time before the consequences of this development take on ugly proportions, you should take a look at a recent UNESCO report that already finds evidence of regressive gender stereotypes in generative AI (<a href="https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes" target="_blank" rel="noopener">www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes</a>) - AI that children and young people, for example, are already using to do their homework.</p>



<h3 class="wp-block-heading">Reason #2: Discrimination means missing out on (economic) opportunities.</h3>



<p>27% of people living in the EU live with one or more disabilities. While not all people with disabilities are impaired when interacting with digital products, the number of people affected by non-inclusive systems is large enough to have brought the EU to its senses and led it to enact the European Accessibility Act (EAA).</p>



<p>It is therefore all the more surprising that there is still no online inclusivity countdown, given that women, at an average of almost 51%, make up the majority of the EU's population. Based on these facts, it seems downright absurd that we are not even trying to regulate the data models used in algorithms - models that, as things stand, discriminate against the majority of people.</p>



<p>No matter what role women, i.e. half of the world's population, play in your daily work - whether they are customers, employees, patients or voters - if you rely on discriminatory data when interacting with these people, you will miss out on opportunities and possibilities, e.g. to attract customers, employees or voters.</p>



<h3 class="wp-block-heading">Reason #3: Discrimination is expensive</h3>



<p>If the argument of discrimination against more than 50% of the population seems too weak, the financial impact of discrimination may be a more convincing one.</p>



<p>A costly development fiasco such as the Amazon recruiting tool or the Austrian Labour Market Service's labour market opportunity assistance system (AMAS) (Stefanie's talk at WUD Berlin addresses this tool: <a href="https://youtu.be/PndW3UR_p1s?si=uRhYHQJpB-DSrp2k&amp;t=483" target="_blank" rel="noopener">youtu.be/PndW3UR_p1s?si=uRhYHQJpB-DSrp2k&amp;t=483</a>) could have been avoided if the underlying data had been analysed and adapted before the tools were developed.</p>



<p>If you think that this type of discrimination won't have a negative impact on your budget because you're not developing tools yourself, you're wrong. If, for example, you use the internet to advertise to your target group, discriminatory algorithms could prevent you from reaching the people you want. Find out more: <a href="https://algorithmwatch.org/en/automated-discrimination-facebook-google/" target="_blank" rel="noopener">algorithmwatch.org/en/automated-discrimination-facebook-google</a>.</p>



<p>We are sure that there are many more reasons for an online inclusivity countdown, but listing them is only relevant if we find an answer to the following question: how do we design non-discriminatory systems and services when our tools are flawed?</p>



<p>The current international consensus seems to be that data cleansing and enrichment, as well as the regulation of training data, are not feasible. We will therefore have to develop our own solutions.</p>



<h2 class="wp-block-heading">Designing just systems in an unjust world</h2>



<h3 class="wp-block-heading">#1 Get to know your data</h3>



<p>Whether you are basing a design on analytics data or training an AI with data sets, take a close look at your data set and put it to the test. Find out how, when and by whom the data was collected. Ask what kind of data was collected and, perhaps more importantly, what was not collected. If you don't do this, you are more or less flying blind - comparable to analysing the performance of a website whose traffic has not been cleaned of spam.</p>
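

<p>What such a first look can involve in practice: a minimal sketch in pandas (the file and column names are hypothetical) that checks who is represented in a data set, what was not collected, and whether an outcome is skewed across groups:</p>



<pre class="wp-block-code"><code>import pandas as pd

# Hypothetical training/analytics data; file and column names are illustrative
df = pd.read_csv("training_data.csv")

# Who is represented, and in what proportions?
print(df["gender"].value_counts(normalize=True, dropna=False))

# What was NOT collected: share of missing values per column
print(df.isna().mean().sort_values(ascending=False))

# Is an outcome one-sided across groups?
print(df.groupby("gender")["label"].mean())
</code></pre>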



<h3 class="wp-block-heading">#2 Get to know your users</h3>



<p>User research can do more than just inform user-centred design. It also has the potential to complement and challenge existing data. This is important because data is rarely neutral and unbiased. Real insights can only be secured by asking the people you are designing for. To prevent these insights from becoming another source of biased data, they should be thoroughly documented, explaining, for example, how they influence design decisions.</p>



<h3 class="wp-block-heading">#3 Diversify your teams</h3>



<p>Every person is ultimately the result of "Nature and Nurture". Different people, shaped by different environments, ask different questions and develop different solutions. This is an advantage that should definitely be utilised.</p>



<p><strong>While we can do nothing but hope that one day the online inclusivity countdown will start ticking, we won't sit idly by and wait for that day! If you need help creating inclusive design solutions, please get in touch! The first Geekettez claim was "We Design For Humans". We have remained true to this claim since 2012. #PowerToThePeople</strong></p>






<h4 class="wp-block-heading">Further links and sources:</h4>



<ul class="wp-block-list">
<li>Initiative D21: <a href="https://initiatived21.de/publikationen/digital-gender-gap" target="_blank" rel="noopener">initiatived21.de/publikationen/digital-gender-gap</a></li>



<li>Online Accessibility Countdown by Annika Brinkmann: <a href="https://online-accessibility-countdown.eu/" target="_blank" rel="noopener">online-accessibility-countdown.eu</a></li>



<li>Women Tech Network: <a href="https://www.womentech.net/en-de/women-in-tech-stats" target="_blank" rel="noopener">www.womentech.net/en-de/women-in-tech-stats</a></li>



<li>Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes: <a href="https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes" target="_blank" rel="noopener">www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes</a></li>



<li>Facts and figures on disability in the EU: <a href="https://www.consilium.europa.eu/de/infographics/disability-eu-facts-figures/" target="_blank" rel="noopener">www.consilium.europa.eu/de/infographics/disability-eu-facts-figures</a></li>



<li>Percent female population - Country rankings: <a href="http://www.theglobaleconomy.com/rankings/percent_female_population/Europe/" target="_blank" rel="noopener">www.theglobaleconomy.com/rankings/percent_female_population/Europe</a></li>



<li>Amazon's "holy grail" recruiting tool was actually just biased against women: <a href="https://qz.com/1419228/amazons-ai-powered-recruiting-tool-was-biased-against-women" target="_blank" rel="noopener">qz.com/1419228/amazons-ai-powered-recruiting-tool-was-biased-against-women</a></li>



<li>Austria's employment agency rolls out discriminatory algorithm, sees no problem: <a href="https://algorithmwatch.org/en/austrias-employment-agency-ams-rolls-out-discriminatory-algorithm/" target="_blank" rel="noopener">algorithmwatch.org/en/austrias-employment-agency-ams-rolls-out-discriminatory-algorithm</a></li>



<li>"UUX - but inclusive please" - Stefanie's talk at WUD 2021, Berlin: <a href="https://youtu.be/PndW3UR_p1s?si=uRhYHQJpB-DSrp2k&amp;t=483" target="_blank" rel="noopener">https://youtu.be/PndW3UR_p1s?si=uRhYHQJpB-DSrp2k&amp;t=483</a></li>



<li>Automated discrimination: Facebook uses gross stereotypes to optimise ad delivery: <a href="https://algorithmwatch.org/en/automated-discrimination-facebook-google/" target="_blank" rel="noopener">algorithmwatch.org/en/automated-discrimination-facebook-google</a></li>
</ul>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Making informed decisions and prioritising requirements with personas and journey maps</title>
		<link>https://birdux.studio/en/personas-and-journeymaps/</link>
		
		<dc:creator><![CDATA[Stefanie]]></dc:creator>
		<pubDate>Wed, 05 Oct 2022 11:21:29 +0000</pubDate>
				<category><![CDATA[Experience Design]]></category>
		<category><![CDATA[Research and Evaluation]]></category>
		<category><![CDATA[UX Strategy]]></category>
		<category><![CDATA[customerexperience]]></category>
		<category><![CDATA[cx]]></category>
		<category><![CDATA[userexperience]]></category>
		<category><![CDATA[ux]]></category>
		<guid isPermaLink="false">http://neu.thegeekettez.com/?p=13127</guid>

					<description><![CDATA[Research-based personas and the resulting user/customer journeys simplify requirements and product development enormously, as they help to identify optimisation potential. This in turn helps to make well-founded decisions about product functions, interactions and navigation paths, as well as facilitating the prioritisation of functions. There are often many different opinions within a team and company as to who the users [...]]]></description>
										<content:encoded><![CDATA[<p><strong>Research-based personas and the resulting user/customer journeys simplify requirements and product development enormously, as they help to identify optimisation potential. This in turn helps to make well-founded decisions about product functions, interactions and navigation paths, and makes it easier to prioritise functions.</strong></p>



<p>There are often very different opinions within a team and company as to who the users or customers are and what their wishes, needs, questions and goals are. As a result, there is a lot of talk about "<em>the user</em>" or "<em>the users</em>". However, these "users" are very malleable and can adapt perfectly to the opinions and assumptions of whoever is currently talking about them. Alan Cooper, the "inventor" of personas, called this phenomenon "<em>the elastic user</em>".</p>



<figure class="wp-block-image aligncenter size-full"><a href="https://birdux.studio/wp-content/uploads/2022/10/elastic-user-1.png"><img decoding="async" src="https://birdux.studio/wp-content/uploads/2022/10/elastic-user-1.png" alt="" class="wp-image-13130"/></a><figcaption class="wp-element-caption"><em>Fig. 1: "The elastic user" by Alan Cooper in About Face 3. adapts perfectly to the respective product team.</em></figcaption></figure>



<p>Instead of deciding and prioritising from the user's perspective which functions are planned and when features are implemented, decisions are made on the basis of opinions and assumptions, or even from the perspective of the technology.</p>



<p>The result is systems that are developed without users in mind and that may be overloaded, offer no real added value, fail to stand up to the competition or provide nothing new in terms of innovation.</p>



<p>One solution here is <em>research-based personas</em>. Research-based personas can serve as the starting point for journey maps, which in turn reveal the optimisation and innovation potential of products and services. This makes requirements and product development and the prioritisation of functions much easier and bases them on well-founded decisions rather than opinions.</p>



<h2 class="wp-block-heading"><strong>What are personas?</strong></h2>



<p>Personas are prototypical descriptions of representative users. They are created based on qualitative interviews and, if necessary, contextual observations. In short, personas are a way of summarising user research results. They are neither real people nor an "average" user, nor should they be based on stereotypes. They are <em>user models</em>.</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="609" height="388" src="https://birdux.studio/wp-content/uploads/2023/08/persona-example.png" alt="Persona profile example" class="wp-image-21583" srcset="https://birdux.studio/wp-content/uploads/2023/08/persona-example.png 609w, https://birdux.studio/wp-content/uploads/2023/08/persona-example-480x306.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 609px, 100vw" /></figure>



<p><em>Fig. 2: This is what a persona profile could look like: The persona has a name, goals, preferences, concerns and questions</em></p>






<p>But what does that even mean - "model"? George Box, a well-known statistician, summed this up in his famous saying "<em>All models are wrong, but some are useful</em>". Burnham and Anderson (2002) described models as "<em>(...) a simplification or approximation of reality (...) they therefore do not reflect the whole of reality</em>". In short, models are used to represent complex things - such as the brain, the universe or an underground network - with a useful level of abstraction.</p>



<p><strong>Personas are user models</strong></p>



<p>Why do we attach so much importance to this? We are convinced that it is very important to make it clear that personas are <em>models</em>, because this means we are aware at all times that they do not reflect reality 100%, but rather are an <em>abstraction</em> of this complex reality. They should be seen as tools for communicating research results and help to ensure a shared understanding of these results within the team or organisation.</p>



<p>This abstraction leads to the question: why do we use models at all instead of simply using reality?</p>



<p>The BVG underground map is an excellent example of a model. The map contains the most important information to get to the desired destination: clear names and colours for each line, the order of the stations on each line, the transfer stations between the lines, and so on. But the details that are less important to us as passengers - such as the depth of the individual tunnels or the exact distance between the stations - are ignored. A civil engineering company, for example, needs different models, because it wants to do something other than simply travel from A to B. For us as passengers, such a level of detail would not be necessary; it would very probably even make the map extremely difficult to read and incomprehensible to us.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="781" height="552" src="https://birdux.studio/wp-content/uploads/2023/08/modellbsp-bvg.png" alt="" class="wp-image-21587" srcset="https://birdux.studio/wp-content/uploads/2023/08/modellbsp-bvg.png 781w, https://birdux.studio/wp-content/uploads/2023/08/modellbsp-bvg-480x339.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 781px, 100vw" /></figure>



<p><em>Fig. 3: A model of the BVG underground network with meaningful abstractions of reality</em></p>






<p>Even models of the universe do not explain how the universe works in "reality", but they make the concept tangible for non-astrophysicists and save us from having to pore over specialised journals to gain an understanding of the subject area.</p>



<p>Similarly, user research can be used to create descriptive models of the users. These models help to communicate the complexity of the interview data to the team in a more understandable way. Models show how things work in a consumable form that is accessible and easy to communicate. It is easier to communicate a few personas than to read all the ethnographic reports.</p>



<h4 class="wp-block-heading"><strong>Personas inform the ACTUAL state of a user or customer journey</strong></h4>



<p>Research-based personas provide insights into questions, needs and concerns, as well as typical usage scenarios. From this information, the current state of the customer journey can be derived - with all its positive and negative experiences, the <em>pain points</em>. These pain points are exactly what is of great interest, because this is where the potential for optimisation and innovation lies.</p>



<h2 class="wp-block-heading"><strong>What are journey maps?</strong></h2>



<p>A journey map is a representation of all interaction points (or <em>touchpoints</em>) between your users/customers (represented by the personas) and your product or service. Journey maps therefore represent the <em>current</em> user experience and are ideal for uncovering optimisation potential. The personas based on the interview data are used to determine the respective touchpoints; the current state of the user experience is then "mapped" onto these touchpoints. These can be both positive and negative experiences, such as worries, fears, questions or concerns identified in the interviews. This mapping reveals where the potential for optimisation lies: often precisely in the "gaps" in an otherwise positive user experience.</p>



<p>All of this is summarised in an overview so that it becomes clear exactly where the greatest need for optimisation exists.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" src="https://birdux.studio/wp-content/uploads/2023/08/journeymap-example.png" alt="Example of a Journey Map" class="wp-image-21589" width="597" height="425" srcset="https://birdux.studio/wp-content/uploads/2023/08/journeymap-example.png 597w, https://birdux.studio/wp-content/uploads/2023/08/journeymap-example-480x342.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 597px, 100vw" /></figure>



<p><em>Fig. 4</em>: <em>An example of what a user/customer journey map can look like</em></p>
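

<p>If you want to keep such a map in a machine-readable form alongside the visual overview, a minimal sketch could look like the following (the structure and field names are our own invention, not a standard):</p>



<pre class="wp-block-code"><code>from dataclasses import dataclass

# Hypothetical, minimal structure for one touchpoint on a journey map
@dataclass
class Touchpoint:
    phase: str        # e.g. "Discover", "Compare", "Buy"
    action: str       # what the persona does at this touchpoint
    experience: int   # -2 (strong pain point) to +2 (delight)
    evidence: str     # interview quote or observation backing it up

journey = [
    Touchpoint("Discover", "Searches for the product", 1, "'Easy to find.'"),
    Touchpoint("Buy", "Fills in the checkout form", -2, "'Why do they need all this?'"),
]

# The pain points (negative experiences) are the optimisation candidates
pain_points = [t for t in journey if t.experience &lt; 0]
</code></pre>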



<p>This description of the user experience with the product/service is then a valuable starting point for the subsequent ideation phase (<em>future</em> scenarios/maps) as well as for the prioritisation of functions and further product development.</p>



<h2 class="wp-block-heading"><strong>Conclusion:</strong></h2>



<p>Research-based personas and the resulting journey maps are excellent tools for subsequently developing well-founded ideas for exploiting optimisation potential. This in turn forms a solid basis for user-centred, prioritised requirements and product development: with a cross-departmental understanding of your users and customers, you ensure that the entire team shares a common understanding of the users and their goals and needs, can make informed decisions about what should be implemented and when, and thus reduce your cost risk in a new design or redesign process.</p>






<p><strong>Literature</strong></p>



<ul class="wp-block-list">
<li>Alan Cooper, Robert Reimann, Dave Cronin, About Face 3: The Essentials of Interaction Design, John Wiley &amp; Sons, Inc., New York, NY, 2007</li>



<li>Burnham, K. P.; Anderson, D. R. (2002), Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.), Springer-Verlag</li>
</ul>






<p><strong>Picture:</strong> Header Photo by <a href="https://unsplash.com/@kaleidico?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener">Kaleidico</a> on <a href="https://unsplash.com/s/photos/design-team?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener">Unsplash</a></p>






]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Ethical aspects and principles of usability tests</title>
		<link>https://birdux.studio/en/ethical-aspects-usability-test/</link>
		
		<dc:creator><![CDATA[Stefanie]]></dc:creator>
		<pubDate>Tue, 27 Sep 2022 09:00:27 +0000</pubDate>
				<category><![CDATA[Research and Evaluation]]></category>
		<category><![CDATA[usability testing]]></category>
		<guid isPermaLink="false">http://neu.thegeekettez.com/?p=13110</guid>

					<description><![CDATA[As part of the psychology degree programme, students have to complete a certain number of so-called subject hours for admission to the final thesis. This is a total of around 30 hours and its purpose is to familiarise you with research from the perspective of the participants before you plan and carry out psychological studies yourself. Although this is time-consuming, it is really valuable, as you learn some [...]]]></description>
										<content:encoded><![CDATA[<p>As part of the psychology degree programme, students have to complete a certain number of so-called test subject hours before being admitted to the final thesis. This amounts to around 30 hours in total and serves to familiarise students with research from the perspective of the participants before they plan and carry out psychological studies themselves. Although this is time-consuming, it is really valuable, as you become familiar with some of the pitfalls in communication and study design. You also learn how it feels to be a "test subject".</p>



<p>Knowing about the feelings and concerns of the test subjects is not insignificant, because studies such as usability tests also place a certain psychological pressure on the participants, which in turn can have a negative impact on the test result.</p>



<h2 class="wp-block-heading">Possible fears of test subjects</h2>



<ul class="wp-block-list">
<li>"Stage fright" or performance anxiety. This pressure to perform can in turn affect self-confidence and self-efficacy and lead to a kind of self-fulfilling prophecy.</li>



<li>Test subjects might feel "stupid" if they can't solve tasks or find things in the system. They blame themselves rather than the system; to them, it feels like an IQ test.</li>



<li>They compare themselves inwardly with other users according to the motto: "Surely others don't behave so stupidly..."</li>
</ul>



<p>As moderators of a test, we should therefore always try to put ourselves in the participants' shoes: as test subjects, they are sitting in the "spotlight", i.e. they are the centre of observation and are asked to perform tasks in front of strangers with a product that is unknown to them and possibly not optimally usable. It is quite natural to be a little nervous in this situation. The participants then have thoughts such as "Am I doing this right?" or "Do these people think I'm stupid because I didn't understand?"</p>



<p>This pressure is real. Jared Spool, an American usability consultant, told a story about seeing a test subject cry during a usability test. This situation came about through an accumulation of careless behaviour on the part of the moderators and test leaders: the original participant didn't turn up on the day of the test, and an employee who had just completed her first day of work was simply brought in as a quick replacement. The team thought it was a good idea to use her because, unlike other employees, she knew little about the product. There were also a number of observers in the observation room who had not been briefed on how to behave during a test, including the test person's boss. And last but not least, no pilot test had been conducted, so the team did not know that the first task the participants were supposed to complete was in fact completely outdated, incorrectly formulated and therefore impossible to perform. As the woman frantically tried to complete this first task, everyone but her quickly realised that the task was outdated and impossible to complete, and they started laughing at their own stupidity. Unfortunately, the user thought they were laughing at her, and she started to cry.</p>



<p>This is exactly the kind of situation we want to avoid.</p>



<h2 class="wp-block-heading">How to respond to these fears</h2>



<p>We can counter such fears by repeatedly emphasising, for example, that we are not testing the test person, but the system. The "<em>I'm too stupid to use the computer</em>" mindset is unfortunately deeply internalised; <a href="https://www.hanselman.com/blog/bad-ux-and-user-selfblame-im-sorry-im-not-a-computer-person" target="_blank" rel="noreferrer noopener">self-blaming</a> is a real problem here. Many users of computer systems still think that it's their fault if something doesn't work, that they can't handle the software "properly", and blame themselves rather than the system.</p>



<p>Here is a sentence that we like to repeat like a prayer wheel, in this or a modified form, during tests:<br>"<em>We are not testing you, we are testing the system</em>", and: <em>"Any problems you may have are the fault of the system. We need your help to track down these problems in order to make the system better and easier to use. You are helping us to assess the system, because we are too blind to it."</em></p>



<p>The wording/framing - i.e. how the test itself is labelled - also plays a role here. For example, talking about "user testing" implies that we want to test the user. But this is not the case. We are testing the usability of the system, which is why it is more appropriate to speak of "usability testing/verification" - especially in front of the users.</p>



<p>Appreciative and respectful behaviour also plays a major role. Sounds obvious, but it is important to remember this time and again, especially when things get hectic.</p>



<p><strong>Therefore, here are some basic and simple ways to reduce psychological stress in test subjects and conduct a usability test according to ethical guidelines.</strong></p>



<h2 class="wp-block-heading">11 ethical principles for a usability test</h2>



<ol class="wp-block-list">
<li><strong>Language</strong>. We <a href="https://birdux.studio/en/ux-thoughts-language-matters/">argue in favour of avoiding the term "user test"</a>. We don't test the user; we test a system with the help of the user.</li>



<li><strong>Creating a friendly atmosphere</strong>. The atmosphere should be relaxed and free from distractions and interruptions such as questions from others. Drinks and small snacks are always welcome during in-person tests.</li>



<li><strong>Let the test person arrive</strong>. A welcome and some small talk are important for participants to "arrive" and relax. This should therefore be included in the schedule.</li>



<li><strong>Informed consent.</strong> The informed consent form should be given to the test subjects a few days before the test and signed and returned to the test team on the day of the test at the latest. This consent should contain information about the purpose of the test and its general procedure. It should also clearly and precisely describe how the results will be used and how it will be ensured that the participant's data is treated confidentially. Participants should be informed comprehensively and clearly about how the data will be used. This means there should be information about who can access the data, where and for how long the data is stored (GDPR), and until when participants can request the deletion or non-use of their data - as this is usually difficult once data has been anonymised. The voluntary nature of participation and the possibility for participants to cancel the test at any time should also be mentioned. Furthermore, contact options should be offered for any questions.</li>



<li><strong>Repeat the clarification directly before the test.</strong>&nbsp;Before the test begins, it is best to briefly go through all the points that can also be found in the informed consent with the respondents and make sure once again that everything has been understood and that there are no more unanswered questions in this regard. It is advisable to reiterate the anonymisation of the data and explain to the participants that, for example, quotes that appear in reports or presentations cannot be traced back to the respective persons. It should also be pointed out once again that the test subjects are taking the test completely voluntarily and have the right to cancel the test at any time. This means that the test subjects are in control if, for whatever reason, they start to feel unwell, for example.</li>



<li><strong>Warm-up questions/pre-test interview.&nbsp;</strong>Before the actual test, where we start working on the tasks, it is a good idea to start with thematically appropriate general warm-up questions. These questions are usually of interest anyway and of a more open nature, e.g. questions about previous use of the product/service in question, questions about frequency of use, questions about specific experiences from the last use, etc. This conversation (pre-test interview) helps the respondents to "wind down" a little, relax a little and familiarise themselves with the environment and the moderators.</li>



<li><strong>Control non-verbal cues and facial expressions.</strong> While we are sitting together with the test subjects, we should never behave impatiently, and we should keep our implicit, unconscious behavioural patterns - such as our facial expressions - in mind. This is easier said than done, because a lot of behaviour simply happens unconsciously. It is therefore a good idea to include this point in the moderation script to keep it present. An eyebrow raised at the wrong moment, breathing too loudly or nervously tapping your fingers can be completely misinterpreted and related by the respondent to themselves or their performance - with consequences for the test.</li>



<li><strong>Respect participants' time</strong>. It is a sign of courtesy if we respect the time of our participants: For us, this means being well prepared and not running over time. This is also where the special role of the so-called pilot test comes into play: the pilot test serves to test our test :) - the test of the test, so to speak. The pilot is very helpful to find out whether our tasks cause confusion or ambiguity or contain errors and whether the test conditions run "smoothly" overall. The pilot test runs exactly like a real test, including test person, consent, script etc... and takes place a few days before the first real test. This gives us enough time to make any corrections or formulate tasks in a more comprehensible way.</li>



<li><strong>After the test: Have data usage confirmed again</strong>. At the end of the test session, the moderators should again ensure/ask whether the respondent agrees to the use of the data. It is possible that the participants may change their mind during participation. As already mentioned, it is usually difficult or impossible to withdraw at a later date due to the anonymisation of the data records. This should be clearly communicated.</li>



<li><strong>For remote tests: offer a technical set-up check the day before</strong>. For moderated remote test sessions, it makes sense to offer a technical set-up check a day in advance. This gives participants the opportunity to familiarise themselves with the technology, such as screen sharing, using a test page (not the test product itself!). Positive side effect: the test subjects get to know the test team, which reduces the excitement on the day of the test and any fear of contact.</li>



<li><strong>Transparent communication about additional observers</strong>. Sometimes managers or developers also want to - and should (!) - be able to watch the tests. This allows members of the team to experience first hand the problems users encounter when using the system in question, which can be important for establishing an understanding of UX in the company and can help to <a href="https://birdux.studio/en/ux-maturity-models/">increase the UX maturity level of a company</a>. On the other hand, it can become problematic if <em>too many</em> people observe the test at the same time, as this adds another artificial dimension to the test: users feel even more "observed" and "tested". This can increase the pressure to perform and the "performance anxiety", which in turn could distort the test results. It is therefore better to have fewer observers and rotate the team per test or test series instead of placing the entire team in the observation room.</li>
</ol>



<p><strong>If there are other people observing the test in addition to the moderators</strong>...</p>



<ul class="wp-block-list">
<li>...this should always be pointed out transparently, and it makes sense for these people to be present briefly during the welcome. It should always be communicated openly and honestly who is watching and why.</li>



<li>...we must brief them well on how to behave before, during and after the test. It is important that there are no interruptions from interposed questions or comments or, as mentioned above, from unconscious gestures, facial expressions or noises (snorting, yawning, laughing, finger tapping). We like to tell the Jared Spool story mentioned above, as it impressively demonstrates how quickly things can unintentionally go wrong.</li>



<li>...it is helpful to hand out sticky notes to the observers to avoid interruptions in the form of questions. In this way, the feeling of an "urgent" question that needs to be asked immediately can be neutralised: observers can write down their questions or thoughts directly without interrupting the test and hand them to the moderators at the end. We briefly review them and then pass these questions on to the respondent if necessary. Time for this should also be scheduled in advance. This procedure ensures that the moderators remain the main point of contact for the test person and that there is no restlessness or confusion.</li>
</ul>



<h2 class="wp-block-heading">Conclusion: Make "test subject hours"!</h2>



<p>There is certainly a lot more to say, but these basic points should suffice for now. We can only recommend that all designers, product managers and developers, like psychology students, complete "test subject hours": slip into the role of the test subject yourself every now and then and take part in a usability test from this perspective. This gives you a sense of how it all feels, and you can use this experience when designing your own tests.</p>



<p><strong>Illustration:</strong>&nbsp;<a href="https://storyset.com/question" target="_blank" rel="noreferrer noopener">Question illustrations by Storyset</a></p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Which usability test do I need?</title>
		<link>https://birdux.studio/en/which-usability-test-do-i-need/</link>
		
		<dc:creator><![CDATA[Stefanie]]></dc:creator>
		<pubDate>Thu, 25 Aug 2022 09:08:35 +0000</pubDate>
				<category><![CDATA[Research and Evaluation]]></category>
		<category><![CDATA[usability testing]]></category>
		<guid isPermaLink="false">http://neu.thegeekettez.com/?p=13060</guid>

					<description><![CDATA[In usability testing, two broad categories of tests can be distinguished - namely summative usability tests (summarising results) and formative usability tests ("shaping" the design). Which type of test you need depends on what you want to find out.]]></description>
										<content:encoded><![CDATA[<p><strong>In <a href="https://birdux.studio/en/usability-tests-why-they-are-worthwhile/">usability testing</a>, two broad categories of tests can be distinguished - namely summative usability tests (summarising results) and formative usability tests ("shaping" the design). Which type of test you need depends on what you want to find out. Let's take a look at the two major usability test categories.</strong></p>



<h2 class="wp-block-heading">Summative tests: Is our product efficient?</h2>



<p>With summative tests, the focus is usually on statistical key figures. Roughly speaking, the focus here is on efficiency - i.e. finding out or measuring whether the designed solution is <em>efficient</em>.</p>



<p><strong>Typical questions that such a test answers are, for example, whether the design fulfils a certain standard or criterion.</strong> These can, on the one hand, be questions concerning <strong>the time required</strong> to complete a task, such as:</p>



<ul class="wp-block-list">
<li>How long do test subjects need for task X?</li>



<li>Are participants able to complete task X in (e.g.) less than one minute?</li>
</ul>



<p>This is particularly relevant in an industrial context, in medicine and whenever devices (cars, aeroplanes) need to be controlled - in other words, wherever <em>response times</em> play a role. Summative tests are also used for <strong>benchmarking</strong>, i.e. to make comparisons. Typical questions here would be, for example:</p>



<ul class="wp-block-list">
<li>Is our product performing better than the competition?</li>



<li>Does Design A perform better than Design B?</li>



<li>Do more people click on the button for design A than for design B?</li>
</ul>



<p>A/B tests are a typical representative of this area.</p>



<h3 class="wp-block-heading"><strong>Typical results of summative tests are based on numbers</strong></h3>



<ul class="wp-block-list">
<li>40% of our users were able to complete task x in less than 30 seconds.</li>



<li>Design A has a 40% higher error rate than Design B.</li>



<li>In Design A, 20% more people click on the button than in Design B.</li>
</ul>



<p>The results therefore relate to "how much" or "how long" - but usually do not answer the "why" behind a specific behaviour. It is called a "summative" test because it aims to summarise ("sum up") the results.</p>



<h3 class="wp-block-heading"><strong>Requirements for summative tests</strong></h3>



<p>As a rule, summative tests require a finished product or at least a fully functional prototype, as the product must function "correctly" in order to allow a valid statement about efficiency.</p>



<p>Summative tests also require at least 20-30 (+/-) participants, depending on the statistical methods used. We therefore need someone who is familiar with statistics and with the requirements of the respective statistical methods.</p>
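

<p>To make this concrete: a minimal sketch of one such method (the numbers are invented for illustration) is a two-proportion z-test comparing the click rates of design A and design B:</p>



<pre class="wp-block-code"><code>from math import sqrt, erfc

# Invented example data: button clicks out of all test sessions
clicks_a, n_a = 48, 120   # design A
clicks_b, n_b = 36, 120   # design B

p_a, p_b = clicks_a / n_a, clicks_b / n_b
p_pool = (clicks_a + clicks_b) / (n_a + n_b)

# Standard two-proportion z-test
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

print(f"A: {p_a:.0%}, B: {p_b:.0%}, z = {z:.2f}, p = {p_value:.3f}")
</code></pre>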



<h2 class="wp-block-heading">Formative tests: How do users experience the product / design?</h2>



<p>Formative tests are more common within UX design processes, as they can be used as part of an iterative design process. The aim of formative tests is to find out something about the participants' experience and behaviour - for example, participants should tell us directly if they find something confusing or odd. Unlike summative tests, these tests often answer the "<em>why</em>" behind a specific behaviour.</p>



<p><strong>Formative tests answer questions such as:</strong></p>



<ul class="wp-block-list">
<li>How do people experience our design?</li>



<li>Where do they get stuck, and above all:&nbsp;<em>Why</em>&nbsp;do they get stuck there?</li>



<li>What are the biggest problems/challenges with our design that we should fix next?</li>
</ul>



<p><strong>Formative tests can be carried out early in the design process and help to identify optimisation potential.</strong> This means that formative tests should ideally be carried out very early in the design process, e.g. with click dummies, in order to recognise initial problems and rectify them quickly.</p>



<p>A typical result of such a test is usually <em>more qualitative</em> than quantitative in nature, e.g.: "<em>Participants had difficulty completing task X because the buttons labelled OK / Cancel were confusing</em>."</p>



<p>Formative tests are therefore carried out when the aim is to uncover problems in order to identify further UX potential. They help us to "mould" the design of a product or service - hence the name "formative". In contrast to summative tests, where the requirements of the statistical methods call for more test subjects, formative tests with approx. 7-10 users can already identify some of the main problems, which can then be optimised.</p>
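

<p>A commonly cited rule of thumb behind these small sample sizes is the problem-discovery model from the usability literature (Nielsen &amp; Landauer): if a problem affects a proportion p of users, the probability of observing it at least once with n participants is 1 - (1 - p)^n. A quick sketch:</p>



<pre class="wp-block-code"><code># Probability of observing a usability problem at least once,
# assuming it affects a given share of users (problem-discovery model)
def discovery_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (3, 5, 7, 10):
    # 0.31 is the average problem-occurrence rate often cited in the literature
    print(n, "participants:", round(discovery_probability(0.31, n), 2))
</code></pre>



<p>Under this model, a problem of that kind has already surfaced with over 90% probability after seven participants - in line with the experience described above.</p>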



<h3 class="wp-block-heading"><strong>Gain additional information with the method of</strong>&nbsp;<strong>Thinking aloud /Thinking aloud</strong></h3>



<p>Experience has shown that the reasons for abandonments often lie in users not finding their way around or feeling poorly informed. And this can be uncovered wonderfully with formative, moderated usability tests and the "thinking aloud" method (Ericsson &amp; Simon, 1984). In "thinking aloud", participants constantly comment on their thought processes while interacting with the system. The aim is to gain additional information about the participants' cognitive processes while they operate the system being tested: What is going through their minds while they are operating it? What questions do they have at the moment? Into which knowledge structures do they categorise the information presented? What irritates or confuses them?</p>



<h4 class="wp-block-heading">Limitations of the method of thinking aloud</h4>



<p>Of course, we can only think aloud what we are conscious of. However, some - in fact, many - of our processes take place below the threshold of consciousness and therefore cannot be verbalised (Wilson, 1994).</p>



<p>This is important to understand and is the reason why, for this form of usability testing, we tend to recommend <em>moderated</em> tests. Moderated usability tests are an excellent tool for finding out the WHY behind abandonments and poor conversion rates.</p>



<h4 class="wp-block-heading"><strong>React to subtle behavioural cues with&nbsp;<em>moderated</em>&nbsp;formative tests</strong></h4>



<p>In a moderated test - and this is the important thing - there is <em>real-time interaction</em> between the usability experts who moderate the test, i.e. guide the participants through it, and the test subjects. This means that we as usability experts sit together with the test participants, either remotely via a video conference or on site, and guide them through the test. This is not possible with unmoderated usability tests, as there is no real-time interaction with the test subjects. More on this later.<br>Through constant observation during a moderated usability test, we are able - in addition to what is thought out loud, i.e. what <em>can</em> be verbalised - to identify subtle behavioural cues, such as facial expressions, squinting of the eyes or frowning, make a note of them and come back to these passages after the actual test. These subtle behavioural cues are often an indicator that users feel insecure but do not necessarily verbalise it because - as explained above - they are not necessarily aware of it. The aim is therefore to return after the actual test not only to the obviously problematic points, but also to exactly those points where such subtle behavioural cues were observable, and to investigate in more depth whether something was not understood, whether uncertainty prevailed and what might have been going on. You can often obtain further valuable information by returning to the relevant points, having the respondents repeat things, for example, and asking specific questions.</p>



<h3 class="wp-block-heading"><strong>Unmoderated formative tests</strong></h3>



<p>Unfortunately, this targeted enquiry is not possible with unmoderated usability tests, as unmoderated usability test sessions are conducted by the participant alone, i.e. the test subjects usually carry out the test remotely from home using special online tools. These sessions are recorded in video and audio so that we as usability experts can view and analyse them afterwards. So <em>no</em> real-time interaction with the respondents takes place here. Nevertheless, it is possible to build questions into the study, which can be asked either after each task (e.g. "<em>How difficult did you find that?</em>") or at the end of the session. However, these questions are usually standardised - i.e. the same for all participants. In unmoderated sessions, there is no opportunity to ask detailed questions tailored <em>specifically to the behaviour of the respective participants</em> or to engage with the respondents in depth.</p>



<p>Another disadvantage is that people tend to think aloud less in unmoderated sessions - simply because there is no one there to remind them. In unmoderated sessions, we have observed participants becoming increasingly silent over time. That's a shame, because you never know what's going through the participants' minds while they're working on the task.</p>



<p>In addition, test subjects may drop out, skip tasks or generally be rather unmotivated to complete the tasks. We rarely find out what caused them to drop out: Did the technology not work? Did they lose interest? Were they interrupted, or was the task too difficult? This can mean that some sessions cannot be analysed. With a moderated test, the social pressure of direct observation creates a little more motivation to carry out the tasks and engage with them.</p>



<p>The lack of detailed follow-up on the specific problems of the respective test subjects is a major disadvantage of unmoderated tests - especially for tests carried out in an early design phase. Unmoderated tests are often used because of their supposed time savings. Of course, you save the time in which the moderators interact 1:1 with the participants - however, in our opinion this often comes with a not inconsiderable loss of insight, as described above. In addition, an unmoderated usability test requires exactly the same amount of planning as a moderated test, if not more. If, despite all this, an unmoderated test session is to be carried out, we recommend it only for systems that are functional, such as live websites, as non-functional aspects in a click dummy, for example, could raise too many questions. If in doubt, we always recommend a moderated session over an unmoderated one, as moderated sessions generally provide more insight.</p>



<p>So we can see that which type of usability test should be carried out - summative or formative, moderated or unmoderated - depends on what exactly you want to find out and how. Summative tests, given a functional prototype or a finished product, can provide information about the efficiency of a product; formative tests help either very early in the design process or with finished products to identify problematic areas and thus further UX potential. Among formative tests, moderated tests offer the great advantage of targeted follow-up questions and thus significantly increase the chances of gaining detailed insights into the user experience - valuable knowledge for improving and optimising the UX of the system.</p>



<p><strong>Literature</strong></p>



<ul class="wp-block-list">
<li>Ericsson, K. A., &amp; Simon, H. A. (1984). Protocol analysis: Verbal reports as data (p. 426). The MIT Press.</li>



<li>Wilson, T. D. (1994). The Proper Protocol: Validity and Completeness of Verbal Reports. Psychological Science, 5(5), 249-252.&nbsp;<a rel="noreferrer noopener" href="https://doi.org/10.1111/j.1467-9280.1994.tb00621.x" target="_blank">https://doi.org/10.1111/j.1467-9280.1994.tb00621.x</a><strong></strong></li>
</ul>



<p><strong>Illustration</strong></p>



<p><a href="https://storyset.com/web" target="_blank" rel="noreferrer noopener">Web illustrations by Storyset</a></p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Usability tests - why they are worthwhile</title>
		<link>https://birdux.studio/en/usability-tests-why-they-are-worthwhile/</link>
		
		<dc:creator><![CDATA[Stefanie]]></dc:creator>
		<pubDate>Wed, 27 Jul 2022 09:25:26 +0000</pubDate>
				<category><![CDATA[Research and Evaluation]]></category>
		<category><![CDATA[usability testing]]></category>
		<guid isPermaLink="false">http://neu.thegeekettez.com/?p=13040</guid>

					<description><![CDATA[Usability tests are a popular and promising evaluative method for uncovering problems in the operation of software, websites and apps. In a usability test, a UX researcher (the moderator) observes the behaviour of a test person while they perform specific tasks within a product (e.g. a website, an app) and obtains user feedback. Usability tests can [...]]]></description>
										<content:encoded><![CDATA[<p><strong>Usability tests are a popular and promising evaluation method for uncovering problems in the operation of software, websites and apps.</strong> In a usability test, a UX researcher (the moderator) observes the behaviour of a test person while they perform specific tasks within a product (e.g. a website, an app) and obtains user feedback. Usability tests can and should ideally be carried out relatively early in the design or development process, as a supplement to other research methods.</p>



<p>You often hear the term "user testing". We recommend removing this term from your vocabulary, especially in front of users. We do not "test" our users; our users test our system for us, because we are usually too blind to do so ourselves.</p>



<h2 class="wp-block-heading"><strong>But what are the benefits of usability tests?</strong></h2>



<h5 class="wp-block-heading">Here are four good reasons why you should invest time in usability testing.</h5>



<h3 class="wp-block-heading"><strong>#1 Conversion optimisation</strong></h3>



<p>Usability tests are generally very good at identifying problems in the operation of systems such as software, websites and apps, and at finding out the reasons for abandonments so that they can then be rectified. They thus uncover opportunities and potential for optimising systems. In addition, if you carry them out regularly and often, you learn a lot about how users think and act.</p>



<h3 class="wp-block-heading"><strong>#2 Reducing support costs</strong></h3>



<p>Costs can be saved, especially for large companies with customer support. For example, if users have problems finding their way around a website or using it, customer support enquiries often increase. With the help of usability tests and the identification of problem areas and subsequent optimisation, these enquiries can be reduced. This pays off in the long term.</p>



<p>Our tip here is: Simply ask customer support to list the top 3 problems of the last few months and then carry out a usability test with regard to these weak points.</p>



<h3 class="wp-block-heading"><strong>#3 Reducing development costs</strong></h3>



<p>Would you build a house without the assessment of a structural engineer? Probably not. Unfortunately, what is a matter of course in architecture does not necessarily apply to software development. Apparently we can afford to build everything here without an initial evaluation, then - because it doesn't work - tear everything down again and then build it all over again.</p>



<p><strong><em>"</em></strong><em>We are following suit</em>", "<em>Unfortunately no time to test at the moment"</em>&nbsp;or: "<em>It's so time-consuming, we'll do it later"&nbsp;</em>- Unfortunately, these are all statements that we usually hear when developing products.</p>



<p>But what does it actually mean when, <em>after</em> the website has already been fully programmed, it turns out that users cannot cope with the navigation - for example, because complaints come pouring in, or because analytics tools show frequent drop-offs and visitors leaving the site?</p>



<p>In plain language this means:</p>



<ol class="wp-block-list">
<li>The first step is to find out exactly what the problem is.</li>



<li>A new concept must then be developed based on these results.</li>



<li>The new concept must then be reprogrammed, which can usually be very time-consuming and therefore very expensive!</li>
</ol>



<p>However, if usability tests are already carried out during the design process, such weak points can often be identified and analysed more closely in advance - i.e. before a single line of code has been written. You can then steer the concept in the right direction early on and save a lot of money and time in development.</p>



<p><strong>In the words of Joyce Durst in "Cost-Justifying Usability" (Bias, 2005):</strong></p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>"<em>It costs much less to code the interface in a customer acceptable way the first time than it does to introduce a poor UI in the field and then rework that UI in version two. In addition, a poor UI will increase support costs."&nbsp;</em></p>
</blockquote>



<h3 class="wp-block-heading"><strong>#4 Increase employee satisfaction</strong></h3>



<p>Good usability of enterprise software can increase employee satisfaction and thus also work efficiency and effectiveness by reducing stress. Poorly usable tools, such as software that does not function smoothly, are an obstacle to work and therefore a potential cause of stress in the workplace (Frese &amp; Zapf, 1994).</p>



<p>Imagine having to write at work with a biro that is very awkward to hold and often just doesn't write - nobody puts up with that for long, so you simply grab another one.</p>



<p>Interestingly, it's not that rare for software in the workplace to not run as smoothly as you would like - but you can't just replace it as quickly as a biro. In addition, most people have become accustomed to the idea that "<em>the technology is acting up again</em>", or they blame themselves.</p>



<p>This can cause stress in everyday working life, which can have a detrimental effect on employees and the organisation. Stressed employees can fall ill more often, may enjoy their work less and are therefore less efficient and effective - also because they simply need more time to deal with software that is difficult to use, or have to look things up or ask colleagues again and again.</p>



<h2 class="wp-block-heading"><strong>Final Thoughts</strong></h2>



<p>So we can see that good usability and UX pay off, saving time and money and benefiting the company!</p>



<p>Depending on what you want to find out, there are two broad categories of tests - summative usability tests (which summarise results) and formative usability tests (which shape the design within iterative design processes) - and we'll look at these in another post. Until then!</p>



<p>Would you like to find out more about usability testing? Get in touch with us!&nbsp;<a href="https://birdux.studio/en/contact-the-geekettez/">We look forward to hearing from you.</a></p>



<p><strong>Literature</strong></p>



<ul class="wp-block-list">
<li>Bias, R. G. (2005). 22 Chapter - Cost-Justifying Usability: The View from the Other Side of the Table. In R. G. Bias &amp; D. J. Mayhew (Eds.),&nbsp;<em>Cost-Justifying Usability (Second Edition)</em>&nbsp;(pp. 613-621). Morgan Kaufmann.<a href="https://doi.org/10.1016/B978-012095811-5/50022-5" target="_blank" rel="noreferrer noopener">&nbsp;https://doi.org/10.1016/B978-012095811-5/50022-5</a></li>
</ul>



<ul class="wp-block-list">
<li>Frese, M., &amp; Zapf, D. (1994). Action as the core of work psychology: A German approach.&nbsp;<em>Handbook of Industrial and Organisational Psychology</em>,&nbsp;<em>4</em>.&nbsp;<a href="https://www.researchgate.net/publication/232492102_Action_as_the_core_of_work_psychology_A_German_approach" target="_blank" rel="noreferrer noopener">https://www.researchgate.net/publication/232492102_Action_as_the_core_of_work_psychology_A_German_approach</a></li>
</ul>



<p><strong>Photo</strong></p>



<p><a href="https://unsplash.com/es/@dtravisphd?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText%22" target="_blank" rel="noreferrer noopener">David Travis</a>&nbsp;on&nbsp;<a href="https://unsplash.com/?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noreferrer noopener">Unsplash</a></p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>What UX experts can learn from journalists.</title>
		<link>https://birdux.studio/en/what-ux-people-can-learn-from-journalists/</link>
		
		<dc:creator><![CDATA[Stefanie]]></dc:creator>
		<pubdate>Thu, 07 Jul 2022 09:11:09 +0000</pubdate>
				<category><![CDATA[Experience Design]]></category>
		<category><![CDATA[Research und Evaluation]]></category>
		<category><![CDATA[qualitative research]]></category>
		<category><![CDATA[user research]]></category>
		<guid ispermalink="false">http://neu.thegeekettez.com/?p=12844</guid>

					<description><![CDATA[The problem with surveys as a design research tool. Yes, we know: using surveys for ux research allow the collection of large amounts of data in a relatively short time. This is why there is conventional wisdom that surveys are easy and cheap, which is not entirely true. A good questionnaire design is hard work and often [...]]]></description>
										<content:encoded><![CDATA[<h2 class="wp-block-heading">The problem with surveys as a design research tool.</h2>



<p>Yes, we know: surveys in UX research allow the collection of large amounts of data in a relatively short time. This is why there is conventional wisdom that <em>surveys are easy and cheap</em>, which is not entirely true: good questionnaire design is hard work and often means investing in pre-tests to ensure the questions lead to meaningful answers. Nevertheless, these beliefs are the reason surveys are used so readily in design research. But this is a fallacy.</p>



<p>Surveys work well when you want to know about the demographic structure of a population or general opinions (a.k.a. trends) on already-known topics. They are an excellent tool if you want to evaluate hypotheses. But if that is not the goal, we suggest not using them in UX research. Here is why:</p>



<p>There is something called the <a href="https://en.wikipedia.org/wiki/Social_desirability_bias" target="_blank" rel="noreferrer noopener">social-desirability bias</a>, which is the tendency to respond in a way that makes people look better than they are. For example, a survey respondent might report that they engage in more healthy or "better" behaviours than they actually do. Interviews or - even better - user observations or diary studies allow us to navigate this response bias more effectively, as you can observe the interviewees' reactions and ask better follow-up questions.</p>



<p>In addition, surveys are designed to be <em>standardised</em>. This means everybody gets the same questions and - this is the point - the same options to choose an answer from. Standardisation is important for surveys so that results can be generalised to a larger population thanks to comparable answers. To achieve this, surveys most often use closed-ended questions, providing people with a range of possible answers. Some examples:</p>



<p><em>Are you satisfied with solution A?</em><br>A) Yes<br>B) No<br><br><em>Which colour do you like most?</em><br>A) Red<br>B) Blue<br>C) Green</p>



<p>These questions make the results quantifiable: "<em>35% said that they like blue, but 65% prefer green.</em>"</p>
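<p>A side note on this quantifiability: because the answer options are standardised, tallying them is trivial to automate. A minimal sketch in Python, with made-up response data matching the example above:</p>



<pre class="wp-block-code"><code>from collections import Counter

# Hypothetical responses to "Which colour do you like most?"
answers = ["green"] * 13 + ["blue"] * 7

counts = Counter(answers)
for colour, n in counts.most_common():
    print(f"{colour}: {n / len(answers):.0%}")
# green: 65%
# blue: 35%</code></pre>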



<p>But these types of questions do not tell us&nbsp;<em>why</em>&nbsp;people do not like blue as much as green. Sure, we can add open "why" questions to our survey and have fun mixing qualitative with quantitative data when evaluating large amounts of accumulated responses. However, we cannot respond to the answers of individual people, ask targeted follow-up questions about a specific answer and dig deeper.</p>



<p>Instead of relying mindlessly on quantitative data generated with surveys, we would recommend sitting down and doing 10 in-person interviews, especially if your research includes the exploration of a new topic or domain. Yes, you will receive fewer answers, but you are guaranteed to receive more insights into topics, which will also lead to a deeper understanding of the&nbsp;<em>why</em>.</p>



<h2 class="wp-block-heading">Open-ended questions in interviews vs closed-ended questions in surveys</h2>



<p>In contrast to closed-ended questions used in surveys, open-ended questions in interviews are intended to give the interviewee space to provide us with more detailed answers. For example:</p>



<p><em>What do you think of XYZ?</em><br><em>Can you tell me a bit more about XYZ (...)?</em></p>



<p>Using open-ended questions is especially important if you are unfamiliar with a topic, or you need some background information on it and want to dig deeper. Additionally, this will allow us to see <em>how</em> people react: non-verbal cues are valuable information - especially during the early stages of topic research, when we begin building hypotheses. All this provides valuable information we will not get with a survey.</p>



<h2 class="wp-block-heading"><strong>Unleash your inner&nbsp;</strong>j<strong>ournalist</strong></h2>



<p>When you listen to someone and let them speak in their own words, you will gain new insights, which will lead to possibilities and opportunities that you may not have considered by only asking closed-ended questions.</p>



<p>You also get insights into their thinking - their mental models and schemas, and the vocabulary they use to describe things. Journalists often start their conversations with people with simple open-ended questions:</p>



<p><em>Tell me a little bit about yourself!</em><br><em>How often do you use XY?</em><br><em>Why don't you use XY?</em><br><em>Tell me more about that experience.</em></p>



<p>Using open-ended questions can also be a good follow-up to closed-ended questions:<br><em>Do you like XYZ?</em><br>A) Yes<br>B) No<br>C) Don't know</p>



<p><em>What exactly don't you like?</em><br><em>Can you describe it a little bit more?</em><br><em>Why do you think this might not work for you?</em></p>



<p>By asking open-ended questions you allow your interviewee to expand upon why they think this specific layout/thing/whatever might not work. Depending on those expanded answers you can react, i.e. ask further follow-up questions, which often lead to new insight - a vital benefit you would be missing out on by just presenting a survey to a bazillion people and waiting for their response.</p>



<p>In short, quantity is not always better or more truthful than quality. You will also be surprised by all the new insights you gain through in-person interviews and/or observations - which matters all the more in design research, especially when that research is the basis for design decisions.</p>



<p><strong>Photo</strong></p>



<p><a rel="noreferrer noopener" href="https://unsplash.com/@mrbrodeur?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank">Matthew Brodeur</a>&nbsp;on&nbsp;<a rel="noreferrer noopener" href="https://unsplash.com/collections/1612964/neon?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank">Unsplash</a></p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>User research biases to be aware of: Demand characteristics</title>
		<link>https://birdux.studio/en/user-research-biases-to-be-aware-of-demand-characteristics/</link>
		
		<dc:creator><![CDATA[Stefanie]]></dc:creator>
		<pubdate>Wed, 14 Mar 2018 08:44:53 +0000</pubdate>
				<category><![CDATA[Research und Evaluation]]></category>
		<category><![CDATA[usability testing]]></category>
		<category><![CDATA[ux research]]></category>
		<guid ispermalink="false">http://neu.thegeekettez.com/?p=11019</guid>

					<description><![CDATA[Demand characteristics are one of the so-called experimenter effects in behavioural research. They describe the tendency of interview/usability test participants to give you (the experimenter) what you want based on what the participants think and what you might expect from them. This doesn't require that you tell them what you want explicitly and give them [...]]]></description>
										<content:encoded><![CDATA[<p>Demand characteristics are one of the so-called experimenter effects in behavioural research. They describe the tendency of interview/usability test participants to give you (the experimenter) what you want based on what the participants think and what you might...</p>



<p>This doesn't require you to tell them explicitly what you want or to give them obvious cues. Participants only need to guess what you want from them based on subtle cues you send out, which are often rooted in your implicit opinions. The mere assumption your participants make about what you might want from them is sufficient. For example, a common assumption during usability testing sessions is that you want them to like the product being tested.</p>



<p>This matters because demand characteristics can put the entire validity of your usability test or interview at risk.</p>



<p><strong>Here's how to weaken/temper the effect:</strong></p>



<ol class="wp-block-list">
<li>Make clear at the beginning and throughout your test or interview that you want to hear honest feedback. Try to take away participants' fear of giving undesirable answers.</li>



<li>Be aware of subtle cues and nonverbal language your participant sends out. Does the answer seem forced? Are they struggling? Take notes during the test, ask afterwards in the debrief interview, and possibly play the scenario through again.</li>



<li>In addition to an in-person usability test/interview, provide a post-test questionnaire such as the <a href="https://measuringu.com/sus/" target="_blank" rel="noreferrer noopener">SUS (System Usability Scale)</a>, a quantitative tool to measure perceived usability (see the scoring sketch after this list).</li>



<li>Last but not least: demand characteristics may be one of the reasons why the designer of the system is not always the right person to conduct a usability test or debriefing interview - he or she may send subtle positive cues about their own design, so participants might react more positively than they actually feel about the task or question.</li>
</ol>
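<p>As an aside, the SUS score mentioned in point 3 is easy to compute: each of the ten items is rated from 1 to 5; odd-numbered items contribute (score - 1), even-numbered items contribute (5 - score), and the sum is multiplied by 2.5, giving a value between 0 and 100 (Brooke, 1996). Here is a minimal sketch in Python - the function name and the example ratings are our own illustration:</p>



<pre class="wp-block-code"><code>def sus_score(answers):
    """answers: ten ratings from 1 to 5, in questionnaire order."""
    if len(answers) != 10:
        raise ValueError("SUS needs exactly ten ratings")
    total = 0
    for i, a in enumerate(answers, start=1):
        # Odd items are positively worded: contribution is (score - 1).
        # Even items are negatively worded: contribution is (5 - score).
        total += (a - 1) if i % 2 else (5 - a)
    return total * 2.5

# One participant's (made-up) ratings:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # 82.5</code></pre>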



<p>Because of point 4 - even though we maintain that research should not be outsourced but should be conducted by the interaction design team itself - it may be worth having another person conduct the test or interview; ideally someone who is not directly involved in the design or development team. If this is not possible, it's important to stay aware of this bias throughout your test.</p>



<p><strong>Comic image</strong></p>



<p><a href="http://www.markstivers.com/wordpress/?p=67" target="_blank" rel="noreferrer noopener">http://www.markstivers.com/wordpress/?p=67</a></p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>UX research methods - In which cases are user interviews actually appropriate?</title>
		<link>https://birdux.studio/en/ux-research-methods-in-which-cases-are-user-interviews-actually-appropriate/</link>
		
		<dc:creator><![CDATA[Stefanie]]></dc:creator>
		<pubdate>Tue, 10 Nov 2015 13:37:06 +0000</pubdate>
				<category><![CDATA[Experience Design]]></category>
		<category><![CDATA[Research und Evaluation]]></category>
		<category><![CDATA[design process]]></category>
		<category><![CDATA[interview]]></category>
		<category><![CDATA[user experience]]></category>
		<category><![CDATA[ux research]]></category>
		<guid ispermalink="false">http://neu.thegeekettez.com/?p=10924</guid>

					<description><![CDATA[In the design process, you can use various methods or a mix of methods to make the product, website or app "better" in the sense of "user-centred" - in other words, you gain valuable insights into the world of the user. In UX design, we like to draw on the repertoire of methods that have proven themselves in the social sciences. Therefore [...]]]></description>
										<content:encoded><![CDATA[<p><strong>In the design process, you can use various methods or a mix of methods to make the product, website or app "better" in the sense of "user-centred" - in other words, you gain valuable insights into the world of the user.</strong></p>



<p><strong>In UX design, we like to make use of the repertoire of methods that have proven themselves in the social sciences. So I thought we'd give you a few insights into some of these methods 🙂</strong></p>



<p><strong>The first part starts with a well-known and popular method - the qualitative user interview - and the fundamental question: When does it even make sense to conduct one?</strong></p>



<h2 class="wp-block-heading">First: The role of narrative interviews in the design process</h2>



<p>Interviews can play a major role in the design of (digital) products - <strong>a) as user interviews</strong>, e.g. to "tease out" attitudes or motivations on a certain topic or to gain "expert insights" into complex, unfamiliar topics, or <strong>b) as stakeholder interviews</strong>, to find out which goals are being pursued from a product/business perspective - in other words: to get to the core of those goals and to check whether there are serious differences of opinion about the direction things should take.<br>For the sake of simplicity, I am writing here based on case a) - the user interviews.</p>



<h2 class="wp-block-heading">Isn't it easy?</h2>



<p><em>"Let's just do a few quick interviews, then we'll know..."</em></p>



<p>The interview is a very popular method for obtaining data. By interview I mean here a mostly qualitative <em>data collection method</em> - a <em>narrative</em> interview, i.e. an open, "narrative" oral interview with one or more people. Qualitative, because I am referring to a more open-ended interview. There are also fully standardised interview methods that belong to the <em>quantitative data collection methods</em> and have little or no narrative - i.e. qualitative - character. In such cases, the survey is conducted using a strictly standardised questionnaire and is not kept flexible or narrative, but these are not dealt with here for the time being.</p>



<p>The interview is so popular because it is <em>apparently</em> so easy to carry out. Easy - anyone can ask questions, you might think. But unfortunately it's not. A lot has to be taken into account if you want to obtain meaningful findings that will move you forward in the process.</p>



<h4 class="wp-block-heading">Fig 01 First rule of qualitative research</h4>



<p>If you have ever conducted an interview, you will have realised how complex it can be. Conducting an interview is also quite time-consuming - and therefore, of course, expensive. Just think of what's involved:</p>



<ul class="wp-block-list">
<li>the recruitment of people</li>



<li>the preparation (deciding on the type of interview, preparing the questions)</li>



<li>the actual execution of the interview</li>



<li>the evaluation (e.g. transcription)</li>



<li>plus the subsequent analysis</li>
</ul>



<p>On the other hand, interviews provide you with insights into <em>why</em> people display a certain behaviour - insights that purely quantitative tests often deny you. They therefore give you the opportunity to generate new ideas and assumptions about <em>why</em> something could be the way it is (hypothesis generation), which you can then scrutinise and test (hypothesis testing). We'll come back to this in more detail in a moment.<br>In other words, you should first weigh up very carefully whether an interview will help you answer your question at all, or whether another method might be better suited to the questions you want answered.</p>



<h2 class="wp-block-heading">An interview can help you in these cases</h2>



<p><strong>1) Exploration of a completely new subject area:</strong> If you do not yet have any information about the subject area, i.e. you are at the very beginning and are "poking around" in a topic without a plan. Your intention is to gain initial insights into a topic and roughly explore the area in question. In the best case scenario, the interview will then answer these questions: <em>What, how, why?</em></p>



<p><strong>2) When the "how" is there, but the "why" is missing:</strong> If you already have information, but there are still gaps in your knowledge of certain processes. For example, tests may have already been carried out: you may have identified a certain behavioural pattern in a usability test or through other observations (user observation, Google Analytics or questionnaires) (<em>like</em>/how do people behave), but now we are in the dark, <em>Why</em> this pattern of behaviour exists. Through an interview, you can try to <em>"why"</em> to get on the track.</p>



<p><strong>3) As useful preparation for a test:</strong> Of course, you can also use interviews to collect material for quantitative, hypothesis-testing methods (aka the "hard facts", such as questionnaires, usability tests, A/B tests).<br>In most questionnaires, for example, the answers are already predetermined by the researchers. This is called a "closed response system", which can of course be an objection to such surveys. Or: as UX experts, we specify the wording and text for products, software programmes or websites and wonder why it is not understood.</p>



<p>The crux of the matter is the same in both cases: you don't know whether these linguistic categories, which originated in the brains of the researchers/UX experts, also correspond to the mental concepts/schemas of the test persons/user groups. They may think in completely different "categories".<br>Therefore, a very good method is to conduct interviews with the target test group in advance - i.e. before a test (or questionnaire) is developed - on the exact topic of the planned test. This gives you an insight into their way of thinking, including the formulations the test candidates use. The same effect can often be seen with company websites, for example, which describe their competences/services in their own "internal language" - language that is self-evident <em>internally</em>, but which a layperson or newcomer to the field won't understand a word of.</p>



<p>Here you can make good use of an interview, for example, to design the labelling of navigation points in a way that is appropriate for the user group: i.e. using a wording and choice of words that corresponds to the way users think.<br>This makes sense, for example, for questions relating to specific professional groups (experts vs laypeople) or different age groups (children, adults, older people), etc. The question answered here is therefore: "how", i.e. in which categories, do different groups of people think when describing a situation or topic?<br>This is essentially the same approach that is used for open card sorting.</p>



<h2 class="wp-block-heading">The research cycle</h2>



<p>As you can see, interviews are (mostly) qualitative in nature and are more suitable for generating hypotheses than for testing them. The new assumptions you form on the basis of interviews can then be examined in further tests to see whether they hold up.<br>It works like a cycle: qualitative exploration - quantitative testing - qualitative exploration - quantitative testing, etc. 🙂</p>



<h4 class="wp-block-heading">Fig 02 - The scientific method. Once at the bottom, you can start again from the beginning</h4>



<p>As interviews are so time-consuming, you will usually only interview a limited number of people. Let's say you interview 5-10 selected people. Of course, you can't draw conclusions about the general public from this small number of responses - i.e. you can't apply these new findings to a certain user group, for example. Never ever.</p>



<p>One also speaks of low "external validity"* - i.e. the data obtained cannot be generalised.<br>However, since interviews - as already mentioned - generally have an "exploratory", hypothesis-generating character, i.e. they first serve to "get a taste" of an area, this lack of generalisability is not so important: the hypothesis is tested in further investigations anyway. You then check later: is this really the case, or did the assumptions/findings from the interview only come about by chance?</p>
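<p>To make this lack of generalisability concrete, here is a small illustration of our own (the numbers are entirely made up): suppose 6 of 8 interviewees prefer a certain layout. A 95% confidence interval for the corresponding share in the wider user group - sketched below in Python using the Wilson score interval - is so wide that it pins down almost nothing:</p>



<pre class="wp-block-code"><code>import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a proportion (z=1.96 gives ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

low, high = wilson_interval(6, 8)
print(f"{low:.0%} - {high:.0%}")  # 41% - 93%</code></pre>



<p>An observed 75% is compatible with anything from roughly 41% to 93% - which is exactly why findings from a handful of interviews are treated as hypotheses to be tested, not as generalisable facts.</p>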



<p>As a general rule of thumb, you can say: think carefully about whether an interview will really help you with your question. A qualitative interview always makes sense if you first want to roughly "pre-sort". For example, if you want to get an idea of attitudes and/or motivation on a certain topic. Later in the process, you can then check this assumption in tests.</p>



<figure class="wp-block-image aligncenter size-full"><img loading="lazy" decoding="async" width="299" height="169" src="https://birdux.studio/wp-content/uploads/2018/09/notime.jpg" alt="" class="wp-image-11319"/></figure>



<p>This all sounds like a lot of effort, and as we all know from many projects, unfortunately there is rarely enough time. Of course, you can also do the whole thing "lofi" and "quick and dirty". Especially when it comes to the mental concepts mentioned above (language, wording), you will quickly gain interesting insights that can broaden your own horizons.</p>



<p>You should just keep in mind that such a "quick &amp; dirty" survey is very likely to provide a distorted picture of reality, and you should at least document/record or communicate this somehow. This should not become a habit 🙂</p>



<p>Next time, we'll take a closer look at the different types of interviews.</p>



<h4 class="wp-block-heading">Key takeaways - When to consider a user interview:</h4>



<ul class="wp-block-list">
<li>Reflect: Is the interview a sensible method in my case? What do I want to achieve, what question am I trying to answer?</li>



<li>Always keep this in mind: an interview is about gaining new insights/assumptions, <em>not</em> about testing them (hypothesis generation instead of hypothesis testing)</li>



<li>Consider the time required (recruitment, preparation, implementation, evaluation)</li>



<li>Keep in mind: the data collected can never be generalised to the wider population, due to the small and non-representative sample</li>
</ul>



<p>* <em>Validity is a quality criterion in the social science research process</em></p>



<p>Photo credit: <a href="https://www.flickr.com/photos/cloneofsnake/14150603002/" target="_blank" rel="noopener">https://www.flickr.com/photos/cloneofsnake/14150603002/</a></p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>