[:de]
At KI.M we create quality intelligently and methodically.
This is our passion!
That's why we are guided in our actions by current scientific findings and megatrends such as digitalization, globalization, usability, and mobility, and develop innovative, state-of-the-art solutions. For over ten years, we have been offering our customers digital solutions such as online tests as well as tablet-based and remote assessments.
In the context of the coronavirus crisis, with social distancing and restricted freedom of travel, remote assessment in particular is gaining relevance and attention. In recent weeks, we have supported customers in the digitalization of their assessments and, of course, have also addressed the question of the quality of aptitude diagnostics in online assessments.
What is a remote assessment?
How can I ensure quality remotely?
Can a remote assessment provide the same high quality as a face-to-face assessment? We assess the challenges based on our KI.Standards for Assessment Center Conception and Moderation and show possible solutions and even advantages:
1. Clarification of the Assignment
We strengthen acceptance by involving relevant stakeholders and integrating the Assessment Center (AC) into existing HR processes.
Due to the implementation in a virtual setting, additional stakeholders are involved when clarifying the assignment for the remote assessment (e.g. IT and data protection).
Right at the start of the project, we discuss the hardware and software requirements, data protection guidelines and storage formats (e.g. interface to SAP) with the customer's responsible persons (IT and data protection) and carry out a technical check (e.g. video conferencing system).
Virtual stakeholder workshops via video conferencing can achieve even greater participation, as travel costs are eliminated and stakeholders at distant locations can be included.
2. Requirements Analysis
We define a requirements profile with success-critical aspects of the target function for a valid aptitude assessment in the assessment center.
Requirements profiles and their operationalization as well as success-critical situations can differ in the face-to-face and virtual working world (e.g. presentation skills and non-verbal communication in face-to-face vs. virtual settings).
We recommend identifying success-critical situations in the requirements analysis that could realistically occur virtually in everyday work. Furthermore, during operationalization, care should be taken to ensure that the behavioral anchors are observable in the virtual setting.
Stakeholders at distant locations and in home offices can also be included in virtual workshops – this increases participation. With the KI.PaiRs app, whiteboard and survey functions allow information to be collected and documented very efficiently.
3. Conception
We design the assessment center based on the requirements analysis and empirical evidence in a realistic, systematic, and valid manner.
Sharing and working on the exercise materials takes place virtually. On the one hand, working on screen may be more limiting than working on paper. On the other hand, the more private setting (e.g., working from home) may lower the barrier to sharing materials.
We make the exercise materials less complex for participants, ensure a clear presentation, and increasingly convey information through role-players. It can also be useful to work with parallel versions and ensure regular exchange of scenarios.
Instead of using complex case studies, analytical and planning skills in particular can be assessed more efficiently, more resource-effectively, and more objectively through online performance tests with automated evaluation (e.g., KI.JobIQ and AFB with an automated report).
4. Preliminary Phase
We lay the foundation for the acceptance and quality of the assessment center through systematic pre-selection, transparent communication, and in-depth training.
Both participants and observers may have varying levels of experience in using video conferencing and assessment systems, which can influence their performance and their assessments.
We conduct web-based training on the technical use of the video conferencing system for participants and process participants (in addition to the assessment software for observers). Exercises should also be tested virtually, and technical affinity should be taken into account when composing observer groups.
Conducting participant briefings and observer training virtually via web training eliminates travel expenses; observers practice observing and assessing in a virtual setting; and technical competence can even be strengthened beyond its application in the assessment center.
5. Moderation
We ensure diagnostic quality, a respectful atmosphere, and efficient procedures.
Observation via video conference is cognitively more demanding, building relationships and trust is less intuitive, and the remote setting is more susceptible to external disruptions (e.g., shared use of the internet connection, interruptions by family members, or power outages).
We reduce the observation time per day and send the schedule along with instructions for setting up the remote environment (e.g., surroundings, background, workstation equipment) together with the invitation. At the assessment center, all participants introduce themselves personally via video, and our KI.M assistant provides technical support in case of disruptions.
Participants are in familiar surroundings, which can reduce nervousness and allows for individual breaks without other participants in the process being present. Furthermore, additional people can be brought in temporarily (e.g., to get to know the team, or as expert observers).
6. Decision
We make reliable statements through a requirements-based and rule-based observation, evaluation, and decision-making system.
The perception of physical presence and emotional resonance is limited, and personal distance is increased, which, according to scientific evidence, can lead to harsher assessments (Sears et al., 2013). Observations and assessments must be consolidated virtually.
We sensitize observers to these scientific findings and recommend specific settings for optimal perception of participants. With the assessment software KI.PAT, observations and evaluations are transmitted securely in encrypted form and aggregated automatically.
The objectivity of the observation, evaluation, and decision-making system is increased because the software enables direct observer feedback and eliminates transcription errors caused by manual evaluation entries. KI.PAT also reduces the effort required for documentation and data entry.
7. Follow-up Process
We promote acceptance and employee development through appreciative feedback and targeted reports.
Personal feedback requires a safe and trusting environment. Transferring feedback can be difficult due to the change of medium (e.g., presentation skills and nonverbal communication in face-to-face vs. virtual settings).
We therefore recommend using a conferencing system with video functionality for feedback as well. In addition, we support the transfer to the differing requirements of the virtual and analog working worlds by providing differentiated presentations and recommendations in feedback discussions and reports.
With virtual implementation, the feedback session can more easily be scheduled separately (e.g., on the following day), and other participants in the personnel development process (e.g., manager, coach) can be directly involved. Virtual services from our KI.M Center Training & Development can support in-depth transfer (e.g., through remote coaching, web training, and virtual case consultations).
8. Sustainability
We ensure the sustainable quality of the assessment center through regular empirical evaluations and review meetings.
It has not yet been conclusively clarified whether the remote setting influences acceptance (e.g., remote assessment is still little known), fairness (e.g., technical competence of specific groups), or difficulty (e.g., indications of stricter assessment).
We are currently conducting an empirical study on the acceptance of face-to-face and remote assessments. Initial results indicate a high level of acceptance of our remote assessments. We also regularly review fairness and internal structure on a client-specific basis.
The assessment software KI.PAT allows observers to receive individual feedback on their assessment behavior (rater agreement). Evaluations are also generated automatically, and data can be exported for further analysis.
Overall, we draw a positive conclusion regarding the implementation of our KI.Standards in a remote setting and can say: yes, a remote assessment can provide the same high quality as an in-person assessment and can even offer advantages!
The KI.Standards are available for download!
[:]