In this series, we’ve established the foundational concepts of assessment quality and explored practical methodologies for item writing. Now, we tackle the most important—and often least visible—component: research and statistical validation. An assessment’s claims to validity and reliability must be proven through solid research.
This final installment of our series outlines the minimum statistical standards HR teams should demand from vendors. If the research fails to meet these criteria, you could be basing high-stakes decisions on fundamentally flawed data.
What Research Was Done to Make and Validate the Assessment?
Psychometric research usually begins with drafting a large pool of questions (plus rating scales and instructions), from which a subset will form the final assessment. The pool is refined through expert review and interviews with the target population. Next, data are collected—under Classical Test Theory (CTT), this involves administering the question pool alongside established assessments of the same, related, and unrelated constructs, as well as behavioral outcomes. Analyses then identify which questions to retain and provide evidence of reliability and validity. This process is often repeated across multiple iterations, refining subpar elements until a high-quality final assessment is achieved.
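One common CTT tool for deciding which questions to retain is the corrected item-total correlation: each question is correlated with the total score of the remaining questions, and weak items are flagged for removal. The sketch below is illustrative only, using simulated data and a common rule-of-thumb cutoff (r < .30); it is not any specific vendor's procedure.

```python
import numpy as np

def corrected_item_total_correlations(responses: np.ndarray) -> np.ndarray:
    """For each item, correlate it with the total score of the OTHER items.

    responses: (n_participants, n_items) matrix of numeric item responses.
    Items with low corrected item-total correlations (a common rule of
    thumb is r < .30) are candidates for removal from the pool.
    """
    n_items = responses.shape[1]
    total = responses.sum(axis=1)
    r = np.empty(n_items)
    for j in range(n_items):
        rest = total - responses[:, j]  # total score excluding item j
        r[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return r

# Simulated example: 300 participants, 5 items; item 5 is pure noise
# and should show a near-zero corrected item-total correlation.
rng = np.random.default_rng(0)
trait = rng.normal(size=300)
signal = np.column_stack([trait + rng.normal(size=300) for _ in range(4)])
noise = rng.normal(size=(300, 1))
data = np.hstack([signal, noise])
print(corrected_item_total_correlations(data).round(2))
```

In a real validation study this analysis would be repeated across iterations as weak items are dropped and the pool is re-tested.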
HR teams should check whether the following minimum standards have been met for this research. The more standards met, the greater the trust that can be placed in the assessment. Key questions to ask:
Who Was in the Sample?
Look for large (typically 300+ participants), diverse samples drawn from the target population.
How Many Questions Were Tested?
Best practice is to begin with a large question pool. For example, a high-quality assessment containing 30 questions was likely winnowed down from a pool of 300.
What Analyses Were Used to Select Questions?
In Classical Test Theory (CTT), question selection typically involves Structural Equation Modeling (SEM). At minimum, this should involve two steps, each run on a separate, adequately sized sample: Exploratory Factor Analysis (EFA) and/or Principal Components Analysis (PCA) at step one, followed by Confirmatory Factor Analysis (CFA) at step two. A widely supported alternative to CTT is Item Response Theory (IRT). IRT is sometimes necessary, such as for validating forced-choice assessments, which are incompatible with SEM.
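As a rough intuition for what step one of this process does, PCA inspects the eigenvalues of the item correlation matrix to suggest how many underlying factors the question pool measures (Kaiser's rule of thumb retains components with eigenvalues above 1). This is a minimal numpy sketch on simulated data; real validation work would use dedicated software (e.g., the factor_analyzer package in Python or lavaan in R) and a proper CFA at step two.

```python
import numpy as np

def pca_eigenvalues(responses: np.ndarray) -> np.ndarray:
    """Eigenvalues of the item correlation matrix, sorted descending.

    Under Kaiser's rule of thumb, components with eigenvalues > 1 hint
    at how many factors underlie the item pool (a starting point for EFA).
    """
    corr = np.corrcoef(responses, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)  # symmetric matrix: real eigenvalues
    return eigvals[::-1]

# Simulated example: two independent latent traits, three items each,
# so we expect exactly two components with eigenvalues above 1.
rng = np.random.default_rng(1)
n = 400
t1, t2 = rng.normal(size=n), rng.normal(size=n)
data = np.column_stack(
    [t1 + rng.normal(scale=0.8, size=n) for _ in range(3)]
    + [t2 + rng.normal(scale=0.8, size=n) for _ in range(3)]
)
print((pca_eigenvalues(data) > 1).sum())  # number of retained components
```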
What Analyses Were Used to Check Reliability?
Evidence of reliability is gathered using the following analyses:
- Internal consistency reliability: Coefficient alpha, correlations (item-total, intra-item, inter-item), and factor loadings. Note: Coefficient alpha alone is not sufficient to demonstrate reliability because it has numerous limitations (e.g., nearly meaningless with assessments of 40+ questions).
- Test-retest reliability: Intraclass Correlation Coefficient (ICC), Bland-Altman Limits of Agreement (LOA), Pearson’s correlation.
- Inter-rater reliability: ICC.
What Analyses Were Used to Check Validity?
Validity is typically demonstrated by using correlation and regression to see how the assessment relates to or predicts other variables. For example, a psychological safety (PS) assessment should:
- Predict speaking up in meetings
- Correlate negatively with stress and positively with job satisfaction and an existing PS assessment
- Distinguish high-performing teams (high scores) from low-performing teams (low scores)
- Have no relation to irrelevant variables (e.g., “What device did you use to take the assessment: Mobile or Desktop?”)
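The pattern above can be checked with simple correlations: the assessment should relate strongly to theoretically connected variables (convergent validity) and be essentially unrelated to irrelevant ones (discriminant validity). This sketch uses simulated data; the variable names are hypothetical stand-ins for real study measures.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
ps_score = rng.normal(size=n)                       # new PS assessment score
speaking_up = 0.6 * ps_score + rng.normal(size=n)   # related behavioral criterion
stress = -0.5 * ps_score + rng.normal(size=n)       # should correlate negatively
device = rng.integers(0, 2, size=n)                 # irrelevant variable

def r(x, y):
    return float(np.corrcoef(x, y)[0, 1])

print(round(r(ps_score, speaking_up), 2))  # clearly positive (convergent)
print(round(r(ps_score, stress), 2))       # clearly negative
print(round(r(ps_score, device), 2))       # near zero (discriminant)
```

In a real validation report, these correlations would come with sample sizes, confidence intervals, and significance tests, and prediction claims would typically be backed by regression models.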
Were the Data Checked to Ensure Assumptions Were Met?
Most statistical analyses require data to meet a certain set of requirements called assumptions. If the data fail to meet these assumptions, the results of the analysis will be inaccurate and misleading: garbage in, garbage out. At first glance, assumptions may seem like the kind of perfectionist nit-picking only scientists need worry about, but they are essential. Ignoring them is like baking a cake with salt instead of sugar; it's all just white powdery stuff, until you taste the result. That said, no data will ever fit all assumptions perfectly, and the damage to quality is proportional to how severely an assumption is violated. With that in mind, HR teams would do well to:
- Ensure analyses were used that were appropriate for the data (e.g., avoiding correlation, regression, or factor analysis with data from a forced-choice assessment).
- Enquire about the process used to check and clean the data. For the above analyses, this would include checking for: missing data and its causes, normality (e.g., skew, kurtosis, Q-Q plots, box plots), linearity (e.g., scatterplots), outliers (e.g., Mahalanobis distance, extraction communalities), multicollinearity (e.g., condition index, variance proportions), and factorability (e.g., Bartlett's test of sphericity, Kaiser's measure of sampling adequacy, correlation and anti-image matrices).
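A couple of these screening checks are easy to illustrate. The sketch below (simulated data; a hand-rolled version of a standard formula, not any vendor's pipeline) computes skew and kurtosis as normality screens and Bartlett's test of sphericity as a factorability check.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(responses: np.ndarray):
    """Bartlett's test of sphericity: is the correlation matrix factorable?

    H0: the items are uncorrelated (identity correlation matrix), in which
    case factor analysis is pointless. A significant result (small p-value)
    supports proceeding with EFA/PCA.
    """
    n, p = responses.shape
    corr = np.corrcoef(responses, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

# Simulated example: 300 participants, 5 correlated items.
rng = np.random.default_rng(3)
trait = rng.normal(size=300)
data = np.column_stack([trait + rng.normal(size=300) for _ in range(5)])

# Normality screens: values near 0 suggest approximately normal items.
print(stats.skew(data, axis=0).round(2))
print(stats.kurtosis(data, axis=0).round(2))

chi2, p_value = bartlett_sphericity(data)
print(p_value < 0.05)  # True here: the data are factorable
```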
Has the Assessment Been Altered Since Validity and Reliability Data Were Collected?
Evidence of an assessment’s quality only holds when it is used in the exact form it was in during testing. Even minor tweaks—switching from a 7- to a 5-point rating scale, altering a label (“slightly agree” → “agree a little”), changing instructions (teammates → managers), or rewording a question (“errors” → “mistakes”)—can radically affect quality. Some changes are unavoidable, and not all are equally disruptive (e.g., rating scale changes matter far more than swapping wording in one question). But the safest approach is to use assessments in the form in which they were tested, limit modifications to what is absolutely necessary, and reduce trust in results relative to the degree of alterations.
Wrapping Up
Determining the research quality of an assessment is the ultimate safeguard against low-quality tools. HR teams must ensure vendors have conducted robust validation studies using large samples and appropriate statistical techniques like SEM or IRT. Equally important is confirming that the assessment has not been altered since its validation.
View these guidelines not as rigid rules, but as essential questions for gathering information. For example, if a vendor offers a forced-choice format but provides evidence-based reasoning for its design, translates complex statistical results into meaningful reports, is transparent about limitations (e.g., not appropriate for direct comparisons), and backs all claims with high-quality evidence, HR teams can adopt it with confidence.
By demanding transparency in research, quality in item design, and adherence to ethical standards, your team can make informed, effective, and confident choices when selecting your employee analytics tools.




