Designing Digital Dashboards for Business Performance: Principles and Evidence
Department of Electrical & Electronic Engineering Technology, University of Johannesburg, Johannesburg, South Africa, 2092
Nontokozo Cynthia Msibi, Sbonelo Mkhwanazi and Siyabonga Mazibuko
223155913@student.uj.ac.za; 223124620@student.uj.ac.za; 223043462@student.uj.ac.za
Abstract.
Digital dashboards have become integral tools for data-driven decision-making, yet variations in design principles, user focus, and performance impact limit their standardization and strategic optimization. This systematic review examined the design principles, implementation characteristics, and performance outcomes of digital dashboards across strategic, operational, and analytical contexts, with emphasis on their influence on business performance. Following the PRISMA 2020 framework, a comprehensive search was conducted across Scopus, Web of Science, and Google Scholar. From 5 793 initial records, 75 studies (2016–2025) met inclusion criteria after duplicate removal and full-text screening. Eligible studies included journal articles, conference papers, book chapters, theses, and dissertations addressing dashboard design, usability, and organizational impact. Publication trends showed a four-phase evolution from early conceptualization (2016–2018) to an emerging peak (2024–2025) characterized by AI-enabled and predictive dashboards. Research output was globally distributed but concentrated in North America, Europe, and the Asia–Pacific, led by the United States (17.3%) and India (10.7%). Most studies were journal articles (60%), indexed across Google Scholar (34.7%), Scopus (33.3%), and Web of Science (32%). Strategic (38.7%) and operational (32%) dashboards dominated, emphasizing KPI selection (41.3%) and visual hierarchy (41.3%) as core design principles. Executives (49.3%) were the primary user group, followed by field teams and analysts. Evaluation predominantly assessed usability (65.3%) and task completion time (34.7%), indicating balanced attention to experiential and performance metrics. Periodic data updates (64%) were more common than real-time modes (36%), though the latter improved responsiveness. Web-based platforms (53.3%) prevailed, with limited exploration of mobile (18.7%) and desktop (4%) environments. 
Key performance outcomes included enhanced decision speed (58.7%) and productivity gains (40%), while the main challenges were user overload (46.7%) and data delays (44%). Future research directions emphasized AI and automation (13.3%), scalability, and cross-system interoperability, alongside emerging interests in policy, governance, and sustainability. Current evidence demonstrates that well-designed dashboards substantially improve cognitive efficiency, decision quality, and organizational performance. However, methodological heterogeneity, data latency, and underreported longitudinal impacts constrain generalizability. Advancing dashboard research will require standardized usability protocols, empirical validation of design frameworks, and integration of intelligent, adaptive interfaces to enhance business agility.
Keywords:
digital dashboards
design principles
usability
business performance
decision-making
data visualization
1 Introduction
Digital dashboards have become indispensable for organizations that handle large volumes of data daily. They collect, summarize, and present information in one location, enabling managers to make timely and well-informed decisions [1, 2]. Dashboards help reduce information overload, provide real-time insights, align strategic goals with day-to-day operations, enhance inter-departmental communication, and strengthen accountability [3, 4]. As organizations expand and face increasingly complex challenges, dashboards evolve from mere reporting tools into strategic systems that promote efficiency, effectiveness, and long-term success [5].
Despite these benefits, there remains debate over what constitutes an effective dashboard. Organizations often struggle to decide which features or metrics to include, and many designs prioritize aesthetics over functionality [6]. The true value of a dashboard lies not only in its visual appeal but in its ability to support decision-making and synchronize operations with strategic objectives [7–10]. Building on these observations, this review aims to synthesize research on dashboard design principles and assess how design choices affect business performance. Previous studies have largely focused on isolated aspects, such as visualization techniques, metric selection, or user interaction, rather than integrating these perspectives [11–13]. By analyzing design principles, success factors, and performance outcomes across organizational contexts, this systematic review fills that gap [14, 15]. Combining the evidence clarifies what defines an effective digital dashboard and establishes a framework linking design with measurable business outcomes [16]. Accordingly, this review integrates prior research, identifies common themes and gaps, and offers practical insights for scholars and practitioners seeking to design dashboards that go beyond aesthetics to serve as strategic tools for organizational success [17].
Table 1
Comparative Analysis of the Existing Review Works and Proposed Systematic Review on the Design Principles and Impact on Business Performance.
[18] Contribution: Investigates the role of business-intelligence dashboards in decision-making and performance management through literature review and practitioner interviews. Pros: Highlights BI capabilities such as predictive analytics, drill-down, mobile access, and self-service BI; identifies practical design principles. Cons: Limited empirical validation; small interview sample; lacks industry-specific metrics and longitudinal testing.

[19] Contribution: Conducts a literature review on dashboard design and development, proposing a framework integrating Balanced Scorecard (BSC) and Goal Question Metric (GQM) models for performance measurement in higher education. Pros: Combines BSC and GQM for strategic alignment; highlights dashboard roles across organizational levels; identifies key design and data-quality factors. Cons: Limited empirical testing; GQM may generate excessive metrics and ignore unstructured data; lacks implementation cases.

[20] Contribution: Integrates design and functionality, emphasizing user-centered prototyping and readability through structured visual communication. Pros: Strengthens usability and communication through iterative prototyping. Cons: No empirical validation; omits technical implementation and performance benchmarking.

[21] Contribution: Designs manufacturing dashboards via KPI surveys, creating operational, tactical, and strategic layouts for multiple user levels. Pros: Aligns dashboard content with user roles; supports mobile access; integrates KPI relevance across hierarchy. Cons: Limited sample size; lacks real-time data integration; no cross-industry validation.

[22] Contribution: Constructs KPI-linked dashboards using Power BI and Excel to optimize business processes via data-driven frameworks. Pros: Combines real-time visualization with structured reporting; supports continuous improvement and cross-sector decision-making. Cons: Lacks empirical testing and sector-specific validation; minimal discussion of implementation challenges.

[23] Contribution: PRISMA-based review for designing low-cost dashboards for SMEs, focusing on digital adoption and long-term performance. Pros: Promotes affordability, strategic alignment, and accessibility; supports SME growth through tailored frameworks. Cons: Limited sector diversity; no real-time validation; omits scalability assessment.

[24] Contribution: Proposes a framework for real-time monitoring of sales and logistics KPIs using integrated data visualization and metrics. Pros: Enhances operational visibility, supports timely decisions, and consolidates diverse data streams into actionable insights. Cons: Lacks empirical deployment; limited scalability discussion; minimal industry validation.

[25] Contribution: Develops and tests a theoretical model explaining dashboard use by operational managers for strategic alignment and performance improvement. Pros: Validates that strategy-surrogation improves managerial and organizational performance. Cons: Relies on survey data; no longitudinal analysis; limited exploration of design variability across industries.

Proposed: Contribution: This review provides a comprehensive synthesis of digital-dashboard design principles and their empirically observed effects on business performance, evaluating dashboards across strategic, operational, and analytical levels. Pros: Merges design theory with performance analytics, offering a multidimensional view of dashboard utility. Cons: Findings depend on the diversity of existing studies, which may underrepresent certain sectors or dashboard types.
1.1 Research questions
Despite significant advances in digital dashboards and business intelligence, gaps persist in understanding how design principles and user-centered features translate into measurable organizational outcomes. Accordingly, this study investigates how visualization strategies, interactivity, and platform characteristics shape decision-making and performance across strategic, operational, and analytical contexts. The guiding research questions are:
How can methodological rigor in dashboard research be strengthened to enhance applicability across business contexts, and what factors contribute to limited empirical validation and longitudinal testing?
Why are frameworks such as the Balanced Scorecard and Goal Question Metric more frequently applied in dashboard studies, and what does this indicate about the underutilization of sector-specific and performance-based evaluation methods?
How can organizations maximize dashboard utility and productivity through user-centered design principles, particularly interactivity, mobile accessibility, and real-time visualization?
What role do affordable, cloud-based dashboards play in improving adoption, scalability, and accessibility, especially for small and medium-sized enterprises (SMEs)?
How can researchers ensure adequate representation of analytical and industry-specific dashboards, given the dominance of strategic and operational perspectives in current literature?
1.2 Rationale
This study examines and evaluates the current state of research on digital dashboards, emphasizing their design principles and demonstrable influence on business performance. Although dashboards are widely recognized as essential instruments for data-driven management, there remains considerable inconsistency in how their design, functionality, and evaluation are conceptualized across sectors. By synthesizing evidence from 2015–2025, the study bridges the divide between visual design and functional efficacy, showing how dashboard characteristics such as KPI selection, visual hierarchy, and data update frequency affect strategic alignment, usability, and decision efficiency. Rather than isolating design aesthetics or metric configuration, this review constructs an integrative framework connecting design decisions to measurable organizational outcomes across strategic, operational, and analytical levels.
1.3 Objectives
The principal objective is to consolidate and critically analyze existing research on digital dashboards to understand how design decisions influence business performance. Specific aims are to:
Identify recurring design principles and user-interface strategies that enhance decision-making and operational efficiency.
Examine dashboard functionality across strategic, operational, and analytical levels, highlighting how contextual and sectoral variations affect adoption and outcomes.
Evaluate methodological practices and identify gaps in empirical validation, longitudinal assessment, and cross-industry comparability.
Through this synthesis, the study develops a conceptual model linking dashboard design to business performance, providing actionable insights for researchers and practitioners seeking to optimize dashboard efficacy in dynamic environments.
1.4 Research Contributions
This work contributes by establishing a structured analytical view of how dashboards are designed, adopted, and evaluated within organizations:
It provides a systematic synthesis of dashboard design elements, including visualization strategies, user-centered features, and interactivity, and their practical effects on decision quality, efficiency, and strategic cohesion.
It identifies key gaps in empirical evidence, including the scarcity of real-world validation, limited industry-specific applications, and absence of long-term performance evaluation.
It proposes a framework linking design choices to measurable business outcomes across industries and organizational scales, thereby advancing dashboard research beyond aesthetic optimization toward performance-driven design science.
1.5 Research Novelty
To the authors’ knowledge, this is the first integrative investigation explicitly connecting dashboard design principles with empirically observed impacts on business performance across strategic, operational, and analytical layers.
It offers a multidimensional evaluation that contextualizes dashboard effectiveness through both human-computer interaction (HCI) and performance-management perspectives.
It introduces a conceptual framework uniting design attributes, such as visualization clarity, interactivity, and platform adaptability, with performance outcomes including decision speed, productivity, and usability.
It provides an evidence-grounded foundation for advancing dashboard research toward intelligent, adaptive, and empirically validated design paradigms.
2 Methods
This review systematically evaluates existing research on digital dashboard design and its impact on business performance. The focus is on studies published between 2015 and 2025, with the aim of identifying design principles, implementation trends, and performance outcomes associated with dashboard adoption. To the authors’ knowledge, no equally comprehensive synthesis has been conducted within this period, underscoring the relevance of the present work. Eligible sources included peer-reviewed articles retrieved from leading academic databases such as Scopus, Google Scholar, and Web of Science, to ensure broad coverage and minimize selection bias [26–30].
2.1 Eligibility Criteria
This review targets peer-reviewed literature published between 2015 and 2025 that explores the design principles of digital dashboards and their effects on business performance. To maintain consistency and ensure high-quality evidence, only studies written in English were considered. The eligibility criteria were carefully developed to include research that investigates dashboard design frameworks, visualization methods, user engagement, and their measurable influence on decision-making or organizational outcomes. Studies that did not directly address these themes were excluded. Furthermore, only research presenting a clear methodological approach or empirical data relevant to dashboard implementation and performance evaluation was included. A detailed overview of the inclusion and exclusion criteria is presented in Table 2 [31–35].
Table 2
Inclusion and Exclusion Criteria.
Topic. Inclusion: research papers focusing on digital dashboards, their design principles, and their impact on business performance. Exclusion: research papers not addressing these themes.
Research framework. Inclusion: papers must include a research framework or methodology for digital dashboard design principles and their impact on business performance. Exclusion: papers lacking such a framework or methodology.
Language. Inclusion: papers written in English. Exclusion: papers published in languages other than English.
Period. Inclusion: articles and research papers published between 2015 and 2025. Exclusion: articles and research papers outside this period.
2.2 Information Sources
To enable a comprehensive and unbiased assessment of digital dashboard design and business performance, systematic searches were conducted across several electronic databases. Scopus, Google Scholar, and Web of Science were selected because they cover a wide range of peer-reviewed content in information systems, business management, and technology. A controlled keyword set related to dashboard design, visualization principles, decision support, and organizational performance was used to maximize the retrieval of relevant studies. Scopus offered broad access to conference proceedings and journal articles, while Google Scholar provided wider scope through the inclusion of grey literature, theses, and emerging research. Web of Science was also used to cross-reference and assess citation patterns, which enhanced the reliability and validity of the selected studies. This multi-database approach ensured that the evidence base was diversified and well balanced for analysis.
2.3 Search Strategy
To ensure a complete review of the principles of digital dashboard design and their impacts on business performance, literature was intentionally gathered from reputable academic databases. Search keywords represented both the technical and managerial sides of dashboard development and organizational performance. Specific terms such as "digital dashboard," "business dashboard," "visualization," "information presentation," "decision support," and "business performance" were employed to identify studies relevant to dashboard design and its role in performance enhancement. A systematic search was conducted through Scopus, Web of Science, and Google Scholar from [start month/year] to [end month/year]. These databases were selected for their wide coverage of peer-reviewed studies in information systems, data visualization, and management sciences. The search utilized a combination of Boolean operators: ("digital dashboard" OR "business dashboard*" OR "management dashboard") AND ("design principle" OR "design guideline*" OR "usability" OR "visualization") AND ("business performance" OR "organizational performance" OR "decision-making" OR "business impact"). To maintain rigor and relevance, results were restricted to English-language studies from 2015 to 2025. The search returned an estimated 4197 Scopus results, 206 Web of Science results, and 1390 Google Scholar results. Following title and abstract screening, duplicates were removed and full-text reviews were conducted to assess consistency with the eligibility criteria. Only peer-reviewed studies that explicitly examined dashboard design attributes and their measurable impact on business or organizational performance were included in the final dataset. A breakdown of the database sources and retrieval statistics is presented in Table 3, and a bibliometric analysis of the search keywords is shown in Fig. 1 [36–41].
Fig. 1
A Bibliometric Analysis of Study Search Keywords: (a) Overlay Visualization (b) Network Visualization
Table 3
Results Achieved from Literature Search.
1. Web of Science: 206 results
2. Google Scholar: 1390 results
3. Scopus: 4197 results
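As a cross-check on the retrieval figures above, the per-database counts can be combined programmatically. The following minimal Python sketch (variable and function names are illustrative, not part of the review protocol) assembles the Boolean search string from Section 2.3 and totals the records reported in Table 3:

```python
# Hypothetical sketch: assemble the Boolean search query and total the
# per-database retrieval counts. Names are illustrative only.

dashboard_terms = ['"digital dashboard"', '"business dashboard*"', '"management dashboard"']
design_terms = ['"design principle"', '"design guideline*"', '"usability"', '"visualization"']
outcome_terms = ['"business performance"', '"organizational performance"',
                 '"decision-making"', '"business impact"']

def or_group(terms):
    """Join terms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Three OR-groups combined with AND, mirroring the search string in Section 2.3.
query = " AND ".join(or_group(g) for g in (dashboard_terms, design_terms, outcome_terms))

# Retrieval counts from Table 3.
retrievals = {"Scopus": 4197, "Web of Science": 206, "Google Scholar": 1390}
total_records = sum(retrievals.values())
```

The three counts sum to 5793, consistent with the initial record total reported in the abstract.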
2.4 Selection Process
To ensure methodological rigor and reduce potential bias in identifying studies, three reviewers (SIM, NM, SM) independently screened the initial 102 records returned from the database searches. Pilot screening was employed to test the selection criteria and verify consistency across reviewers. Disagreement at this level was resolved by group discussion until agreement was reached.
After calibration, the reviewers, working in pairs, independently evaluated the titles and abstracts of all identified articles. Articles that clearly did not meet the inclusion criteria were excluded, and those deemed potentially relevant by at least one reviewer were forwarded for full evaluation. Disagreements were discussed until consensus was reached; where consensus could not be achieved, a third reviewer resolved the outstanding disagreement. Next, the three reviewers conducted independent full-text screening to determine the eligibility of each study. Disagreement at this stage was resolved by discussion, and where necessary the third reviewer made the final inclusion or exclusion decision. This multi-step, independent, and collaborative review process ensured transparency, consistency, and accuracy in study selection. The process is illustrated in Fig. 2.
Fig. 2
Procedures and Stages of the Review.
2.5 Data Collection Process
In accordance with the PRISMA 2020 expanded checklist, a structured and transparent approach was applied to collect data from the included studies. Three reviewers independently extracted relevant information using a standardized data extraction form, while a fourth reviewer supervised the process to ensure consistency and resolve any discrepancies. All differences in interpretation were discussed collaboratively until consensus was reached. The data extraction form was adapted from established methodologies used in prior systematic reviews and was designed to capture key study characteristics, including:
Publication year
Country or region
Type of research (journal article, conference paper, book chapter)
Dashboard design principles
Visualization features
User roles
Reported impact on business performance.
No automation tools were used during this process; all data were manually recorded and cross-verified to ensure accuracy and reliability. Where information was incomplete or unclear, supplementary materials such as appendices and related publications were consulted. If ambiguity persisted, the supervising reviewer, an expert in the subject area, provided guidance. In cases where multiple publications originated from the same dataset, predefined criteria were used to identify the most comprehensive and relevant source, prioritizing peer-reviewed studies published between 2015 and 2025. Only studies published in English were included to maintain consistency and avoid misinterpretation due to translation issues. This rigorous and collaborative data collection process ensured the validity and reliability of the extracted data. The flow of data selection and extraction is illustrated in Fig. 3.
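To illustrate, the key fields of the standardized extraction form can be represented as a simple record structure. This is a hypothetical Python sketch; the field names are illustrative and not taken from the original form:

```python
from dataclasses import dataclass, field

# Hypothetical representation of the standardized data extraction form
# described above. Field names are illustrative only.

@dataclass
class ExtractionRecord:
    publication_year: int
    country: str
    research_type: str  # journal article, conference paper, or book chapter
    design_principles: list = field(default_factory=list)
    visualization_features: list = field(default_factory=list)
    user_roles: list = field(default_factory=list)
    reported_impact: str = "Not specified"

# Example record for one (invented) included study.
record = ExtractionRecord(
    publication_year=2023,
    country="United States",
    research_type="journal article",
    design_principles=["KPI selection", "visual hierarchy"],
)
```

Using a typed record like this makes manual cross-verification easier, since every study is forced into the same set of fields with explicit defaults for missing data.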
Fig. 3
Flow of Data Selection and Extraction.
2.6 Data Items
This section outlines the key data items extracted during the review, including both primary outcomes and contextual variables that inform the design and impact of digital dashboards on business performance. The data items were selected to enable a comprehensive understanding of how dashboard design principles influence organizational outcomes across various industries and settings.
Primary Outcomes
The primary outcomes focused on measurable effects of dashboard implementation on business performance. These included:
Improvements in speed, accuracy, and strategic alignment of decisions.
Reduction in process bottlenecks, enhanced collaboration, and optimized resource utilization.
Feedback from end-users regarding usability, interactivity, and relevance of dashboard features.
Ability to track KPIs and business metrics in real-time.
Evidence of increased reliance on data for strategic and operational decisions.
These outcomes were selected based on their relevance to the review’s objectives and were consistently reported across the included studies. Where multiple outcomes were presented, the most relevant and clearly defined results were prioritized. Any changes to the inclusion or definition of outcome domains were documented with rationale during the data extraction process. Fig. 4 outlines the step-by-step process used to extract relevant data items from eligible studies. It begins with evaluating outcomes and identifying key variables, followed by scaling and filtering data based on predefined criteria. The final steps ensure the reliability of selected data and address any gaps to maintain completeness and consistency in the review.
Fig. 4
Process for Extracting Data Items from Eligible Studies.
Other Variables.
In addition to primary outcomes, several contextual variables were extracted to provide deeper insight into the conditions under which dashboards were designed and implemented:
Publication year, country/region, and type of research (journal article, conference paper, book chapter).
Usability, interactivity, visualization techniques, layout structure, and customization features.
Types of users and their interaction with the dashboard.
Implementation strategies, integration with existing systems, and training/support mechanisms.
Industry sector, company size, and digital maturity level.
Market trends, regulatory requirements, and technological constraints.
Where information was missing or unclear, supplementary materials such as appendices and related publications were consulted. Assumptions were made cautiously and documented, with guidance from the supervising reviewer when necessary. Only English language studies published between 2015 and 2025 were included to ensure consistency and avoid translation-related misinterpretations. This structured approach to data item selection and definition supports the reliability and interpretability of the review’s findings, in line with PRISMA 2020 expanded guidance.
2.7 Study Risk of Bias Assessment
To guarantee the reliability and validity of findings concerning digital dashboards and their effects on business performance, a systematic bias evaluation was performed. The ROBINS-I tool (Risk of Bias in Non-randomized Studies of Interventions) was applied to assess observational and non-randomized studies because most of the evidence within this research area is grounded in non-controlled experiments. This framework enabled a systematic review of the primary areas, including confounding variables, selection bias, measurement bias, and missing data, to enable comprehensive appraisal of the quality of studies.
Bias assessment was carried out independently by four reviewers, each reviewing every study individually. Interpretation discrepancies were addressed by group discussion, and where disagreement persisted, the final verdict was decided by a nominated reviewer. This process ensured objectivity and minimized subjective influence. Where methodological information was ambiguous or incomplete, particularly where methods of dashboard evaluation, sources of data, or measures of performance were not transparent, additional verification steps were followed. These consisted of cross-matching results against credible academic databases such as Scopus, Web of Science, and Google Scholar, and conducting manual searches to bring in pertinent evidence that could otherwise have been left out. No automated tools for bias detection were employed; all studies were thoroughly scrutinized and reviewed in detail. This strengthened the review by ensuring that only robust, high-quality studies on business performance and digital dashboard design were used in the synthesis.
Fig. 5
Risk of Bias Assessment Process for Non-Randomized Studies.
2.8 Effect Measures
To evaluate the impact of digital dashboards on business performance, this review employed structured effect measures focusing on dashboard design, usability, and organizational outcomes. Key effect measures included:
Dashboard type: studies were categorized into strategic, operational, and analytical dashboards, each serving distinct decision-making levels. Strategic dashboards supported executives in long-term planning, operational dashboards facilitated day-to-day management, while analytical dashboards assisted field teams and analysts in detailed problem-solving.
Design principles: effectiveness was measured against core design principles such as simplicity, KPI selection, and visual hierarchy. Simplicity consistently emerged as a key principle, ensuring dashboards remained accessible and minimized cognitive load for diverse user groups.
User roles: dashboards were evaluated based on the user groups they served, including executives, analysts, and field teams. For example, executives relied on high-level strategic dashboards, while analysts and field teams benefited more from analytical and operational dashboards.
Performance metrics: measures such as usability and task completion time were widely used to assess dashboard effectiveness. Usability reflected how intuitively users could interpret insights, while task completion time indicated efficiency in decision-making.
Data update frequency: dashboards were also compared on how often their data refreshed, with studies distinguishing between real-time dashboards, critical for analysts and field teams, and periodic updates, which were sufficient for executive-level reporting.
2.9 Synthesis methods
To ensure the integration of high-quality and relevant studies in this systematic review of digital dashboard design and business performance, a systematic and rigorous synthesis process was employed. The most critical stages of the process are summarized in Fig. 6. Additional procedures were embedded throughout the review to promote consistency and increase the validity of the outcomes. Eligibility criteria were clearly defined and followed during data preparation, collection, presentation, and synthesis. Sources of heterogeneity arising from differences in dashboard design, user groups, and organizational settings were identified and addressed. Sensitivity analyses were conducted where practicable to verify consistency among combined results and to ensure that conclusions are both reliable and actionable for stakeholders considering effective dashboard solutions.
Fig. 6
Systematic Process.
Eligibility for synthesis
Studies included in the synthesis were selected based on their relevance to digital dashboards and their impact on business performance. Specifically, studies needed to report at least one of the following: dashboard type (operational, strategic, analytical), design principle (e.g., simplicity), user role (executives, analysts, field teams), or performance metric (usability, task completion time, productivity, decision speed). Studies not providing sufficient details on dashboard characteristics or performance outcomes were excluded.
Preparing for synthesis
Data extracted from included studies were standardized for synthesis. Missing information in categories such as user role or performance metric was recorded as "Not specified." Data from multiple sources (journal papers, conference papers, theses) were harmonized to ensure consistency in naming conventions and categorical variables, enabling direct comparison across studies.
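The standardization step described above can be illustrated in code. The following is a minimal Python sketch assuming hypothetical field names; it normalizes casing and records missing categorical values as "Not specified":

```python
# Hypothetical sketch of the harmonization step: field names and sample
# records are illustrative, not taken from the actual extraction dataset.

RAW_RECORDS = [
    {"study_id": "S01", "user_role": "Executives", "performance_metric": "Usability"},
    {"study_id": "S02", "user_role": None, "performance_metric": "task completion time"},
]

CATEGORICAL_FIELDS = ["user_role", "performance_metric"]

def harmonize(record):
    """Normalize naming conventions and record missing categories explicitly."""
    clean = dict(record)
    for field_name in CATEGORICAL_FIELDS:
        value = clean.get(field_name)
        if not value:
            clean[field_name] = "Not specified"   # explicit marker for missing data
        else:
            clean[field_name] = value.strip().capitalize()  # consistent casing
    return clean

harmonized = [harmonize(r) for r in RAW_RECORDS]
```

Harmonizing categories this way makes frequency counts across journal papers, conference papers, and theses directly comparable.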
Tabulation and graphical methods
Data were presented in tabular form with columns for study ID, title, year, country, study type, source, dashboard type, design principle, user role, performance metric, refresh rate, platform, outcome measure, and challenges. Studies were grouped primarily by dashboard type (strategic, operational, analytical) and secondary by year of publication to observe trends over time. Graphical methods included bar charts for frequency of design principles and dashboard types, and heatmaps illustrating the mapping of performance metrics to user roles and dashboard categories. Non-standard visualizations such as radar charts were used to display multi-dimensional performance outcomes for each dashboard type.
Statistical synthesis methods
Given the heterogeneity in study designs, outcomes, and reporting, a formal meta-analysis was not possible. Instead, a narrative synthesis was performed to summarize patterns across studies. Descriptive statistics (counts and percentages) were calculated to quantify the prevalence of dashboard types, design principles, user roles, and performance metrics. Microsoft Excel was used to compile data tables and generate visualizations.
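As an illustration of this descriptive synthesis, the following Python sketch computes prevalence percentages with the standard library. The label counts are made up to mirror the proportions reported in the abstract and are not the actual extraction data:

```python
from collections import Counter

# Illustrative dashboard-type labels, one per included study (75 total).
# Counts are invented to mirror the abstract's reported proportions.
dashboard_types = (["strategic"] * 29 + ["operational"] * 24 +
                   ["analytical"] * 15 + ["not specified"] * 7)

counts = Counter(dashboard_types)
n = len(dashboard_types)

# Prevalence as a percentage of all included studies, rounded to one decimal.
prevalence = {kind: round(100 * count / n, 1) for kind, count in counts.items()}
```

With these inputs, `prevalence` yields 38.7% for strategic and 32.0% for operational dashboards, matching the shares reported in the abstract.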
Methods to explore heterogeneity
Heterogeneity was explored by subgrouping studies based on dashboard type, country of study, and publication year. Tables and graphs were structured to highlight differences in dashboard design principles, user roles, and reported outcomes across these factors. For example, usability was primarily reported for executive and analyst dashboards, while task completion time and productivity were emphasized for operational dashboards. Any post hoc exploratory analyses are explicitly noted in the results section.
Sensitivity analyses
Sensitivity analyses were conducted to assess the robustness and reliability of the synthesized findings regarding the design principles of digital dashboards and their impact on business performance. These analyses focused on evaluating how methodological decisions made during the review process may have influenced the results. Studies identified as having a high risk of bias, particularly those that did not directly assess dashboard-related outcomes, were excluded to test the stability of the conclusions. Additionally, records with missing or incomplete data for key variables such as visualization features, user roles, or performance metrics were systematically removed and reassessed. Patterns in the prevalence of dashboard design principles and their reported business impacts were re-evaluated to confirm consistency across different theoretical frameworks and study contexts. Any analyses conducted beyond the pre-specified protocol were clearly marked as post hoc. This rigorous approach ensured that the final synthesis remained valid, accurate, and actionable for stakeholders seeking evidence-based dashboard solutions.
2.10 Reporting bias assessment
During the review process, particular attention was given to identifying and mitigating potential reporting biases that could compromise the validity of the findings on digital dashboards and their impact on business performance. To minimize the risk of publication bias, a comprehensive search strategy was applied across multiple databases, including Scopus, Web of Science, and Google Scholar, alongside grey literature sources. This ensured that both highly cited and less prominent studies were considered, reducing the likelihood of selective reporting.
The assessment of reporting bias incorporated graphical and statistical techniques. Funnel plots were used to evaluate the symmetry of reported outcomes, with contour-enhanced visualizations applied to distinguish between genuine heterogeneity and small-study effects. These approaches provided a clearer understanding of whether positive outcomes were overrepresented due to publication preferences. To enhance the consistency of bias evaluation, independent reviewers assessed the risk of reporting bias, and discrepancies were resolved through structured consensus meetings. Where uncertainty persisted, expert consultation was sought to provide an objective perspective.
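The funnel-plot assessment above rests on a simple geometric idea: in the absence of small-study effects, reported effects should scatter symmetrically around the pooled estimate within 95% pseudo-confidence limits that widen as standard error grows. The sketch below computes those limits; the pooled effect of 0.40 and the standard-error grid are illustrative assumptions, not values from the review.

```python
# Illustrative funnel-plot geometry: 95% pseudo-confidence limits around
# an assumed pooled effect. Neither value comes from the review's data.
POOLED_EFFECT = 0.40  # assumed pooled effect size (hypothetical)
Z95 = 1.96            # two-sided 95% normal quantile

def funnel_limits(se):
    """Lower/upper 95% pseudo-confidence bounds at a given standard error."""
    return POOLED_EFFECT - Z95 * se, POOLED_EFFECT + Z95 * se

# Smaller studies (larger SE) sit lower on the funnel with wider bounds.
for se in (0.05, 0.10, 0.20):
    lo, hi = funnel_limits(se)
    print(f"SE={se:.2f}: [{lo:.3f}, {hi:.3f}]")
```

Studies falling outside these bounds on only one side of the pooled effect, especially at large standard errors, are the asymmetry signal that the contour-enhanced plots were used to inspect.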
The procedures employed align with best-practice methodological standards for systematic reviews, ensuring rigor and transparency. Through this approach, the study provides a balanced synthesis of evidence, offering reliable insights into how dashboard design principles influence organizational performance, while also highlighting gaps for future research.
2.11 Certainty Assessment
The certainty of evidence across the included studies was evaluated using the GRADE approach, which considers risk of bias, heterogeneity, indirectness, imprecision, and potential reporting bias. Evidence was classified into four categories: high, moderate, low, or very low. Findings revealed high certainty for the positive relationship between dashboard implementation and improvements in decision-making efficiency, as these outcomes were consistently supported across multiple well-structured studies. Moderate certainty was assigned to revenue growth and organizational performance indicators, given some methodological variations and limited sample sizes in the available literature. The evidence for conversion rate improvements was also judged as moderate due to mixed reporting standards and minor inconsistencies. In contrast, user adoption and engagement outcomes were rated with very low certainty, largely because of the scarcity of dedicated studies and significant variation in how these outcomes were measured.
This assessment highlights areas where the evidence base is strong and reliable, while also identifying research gaps that require further empirical investigation. By applying GRADE, the review ensures transparency and provides a structured foundation for interpreting the strength of the evidence regarding dashboard design and its impact on business performance.
Table 2
Certainty Assessment Results for Collected Literature.
Outcome | Certainty level | Justification
Decision-Making Efficiency | High | Supported by multiple empirical studies with low bias and consistent findings
Revenue Growth | Moderate | Three studies; some methodological variations and inconsistency
Conversion Rate | Moderate | Limited number of studies; moderate bias risk and imprecise estimates
User Adoption & Engagement | Very Low | Few studies; high variability in measures and notable risk of bias
3 Results
3.1 Study Selection (flow of studies)
The flow chart in Fig. 7 shows the study selection procedure for this systematic review of digital dashboard design principles and their impact on business performance. Technical literature was explored across the selected academic databases using the keywords given in the "Search Strategy" section, and articles were screened against the inclusion and exclusion criteria described in the "Eligibility Criteria" section. The original search of all selected databases produced 5 793 records, which were initially screened by title and abstract. A total of 75 studies were ultimately found to be relevant to the review's focus: 34.7% were indexed in Google Scholar, 33.3% in Scopus, and 32% in Web of Science (see Fig. 11). By publication type, the material comprised journal articles (60%), conference papers (18.7%), book chapters (12%), and theses and dissertations (8.3%) (see Fig. 10). After removal of duplicate entries, 75 unique studies remained and were assessed by full-text analysis. All 75 studies met the specified eligibility criteria and entered the final review; no studies were excluded during full-text screening, ensuring comprehensive representation of the literature on digital dashboards.
Fig. 7
Proposed PRISMA Flowchart.
The temporal analysis of publication trends (Fig. 8. Publication Year Distribution) reveals a clear four-phase developmental trajectory in dashboard research. The Early Period (2016–2018) represents the formative stage, accounting for 18.7% of publications, where foundational design concepts and exploratory implementations emerged. This was followed by a pronounced Growth Period (2019–2021) contributing 41.3% of studies, coinciding with the rise of cloud computing, real-time analytics, and broader enterprise adoption of dashboards. The Maturity Period (2022–2023) reflects 24.0% of publications, demonstrating a consolidation of theoretical frameworks, usability refinement, and cross-domain integration. Finally, the Emerging Peak (2024–2025) marks a renewed surge of scholarly output, emphasizing artificial intelligence, automation, and predictive capabilities in dashboards. Collectively, these trends indicate an evolution from experimentation to sophistication, underscoring an accelerating momentum in dashboard-related research that aligns with advancements in data-driven decision-making and digital transformation.
Fig. 8
Publication Year Distribution.
The geographical distribution of reviewed papers (Fig. 9. Distribution of reviewed papers by country and region) demonstrates a pronounced concentration of research activity in North America, Europe, and the Asia–Pacific regions, reflecting the global but uneven diffusion of dashboard-related studies. The United States (17.3%) and India (10.7%) emerge as the leading contributors, underscoring both advanced digital ecosystems and expanding research capacity in developing economies. European countries, including Germany, the UK, and Portugal, collectively account for a substantial share, reflecting long-standing engagement with performance management and visualization research traditions. Meanwhile, contributions from Africa (notably Ghana and Nigeria) and Latin America (Ecuador and Mexico), though modest, indicate growing participation from emerging regions. This regional pattern highlights a globalizing but still asymmetric research landscape, where high-income economies dominate methodological innovation, while developing regions increasingly explore contextual and applied implementations of dashboard systems in education, healthcare, and governance.
Fig. 9
Distribution of reviewed papers by country and region.
The classification of publication types (Fig. 10. Publication Type Distribution) reveals that journal articles (60%) constitute the majority of the reviewed works, reflecting the field's increasing academic maturity and its focus on rigorous, peer-reviewed dissemination. Conference papers (18.7%) represent the second-largest category, indicating an active community of scholars presenting preliminary findings and methodological innovations at academic and professional gatherings. Book chapters (12%) highlight the field's growing integration within edited academic volumes, often focusing on applied frameworks and cross-disciplinary case studies. Lastly, theses and dissertations (8.3%) contribute a smaller yet important portion, signaling ongoing engagement by emerging researchers and the academic consolidation of dashboard research as a standalone topic. Collectively, this distribution underscores a transition from exploratory conference outputs toward stable, peer-reviewed journal research, evidencing both scholarly maturation and the establishment of dashboards as a recognized area within information systems and data visualization research.
Fig. 10
Publication Type Distribution.
The distribution of indexing sources (Fig. 11. Indexing Source Distribution) indicates a balanced yet stratified publication landscape across different academic databases. Google Scholar accounts for the largest share (34.7%), reflecting its broad inclusivity and capacity to capture a diverse range of materials, including gray literature and non-indexed outputs. Scopus follows closely with 33.3%, representing publications within indexed scholarly databases that emphasize methodological rigor and international visibility. Web of Science, comprising 32%, denotes high-impact and peer-reviewed outlets, underscoring the presence of mature, well-cited studies within the field. The near parity among these three sources suggests that dashboard research enjoys wide academic coverage across both open-access and high-impact domains, indicating sustained scholarly interest and increasing legitimacy within the broader data analytics and information systems research community.
Fig. 11
Indexing Source Distribution.
The thematic classification of dashboard applications (Fig. 12. Dashboard Application Focus) illustrates a clear dominance of strategic (38.7%) and operational (32%) dashboards, signifying their centrality in supporting high-level decision-making and process management across organizations. Analytical dashboards (14.7%) occupy a smaller yet important segment, focusing on diagnostic insights and data exploration to guide performance improvements. A minority of studies (13.3%) were mixed or not specified, often describing hybrid frameworks that span multiple organizational levels. Finally, a small subset (1.3%) focused on iterative design and KPI visualization, representing highly specialized research concerned with visualization refinement and usability testing. Collectively, these distributions highlight a multi-layered research orientation with dashboards evolving from purely operational monitoring tools to strategic intelligence systems that integrate analytics, user interaction, and adaptive visualization for data-driven management.
Fig. 12
Dashboard Application Focus.
The analysis of design principles (Fig. 13. Dashboard Design Principles) reveals that most studies emphasize KPI selection (41.3%) and visual hierarchy (41.3%) as the dominant design foundations in dashboard development. These two principles collectively define how dashboards convey meaning, ensuring that critical indicators are prioritized and that visual organization supports cognitive clarity and decision accuracy. Simplicity (10.7%) appears as a recurring theme across user-centered studies, aligning with minimalist design trends aimed at reducing cognitive load and improving usability. Only a small fraction (6.7%) of studies left their design principles unspecified, suggesting growing methodological transparency in dashboard research. Overall, the evidence highlights a strong adherence to core design and visualization principles, emphasizing that effective dashboards rely not only on data integration but also on strategic presentation and perceptual optimization, balancing analytical depth with visual clarity for improved user engagement and decision-making performance.
Fig. 13
Dashboard Design Principles.
The classification of user groups (Fig. 14. Dashboard User Groups) demonstrates that executives (49.3%) constitute the primary target audience of dashboard implementations, underscoring their central role in strategic monitoring and high-level decision-making. Field teams (22.7%) represent the second-largest user segment, reflecting the growing adoption of dashboards in operational and real-time contexts, particularly in logistics, healthcare, and maintenance environments. Analysts (20%) form a specialized category, employing dashboards primarily for data diagnostics and performance evaluation, often integrating advanced visualization and exploratory tools. A smaller proportion of studies (8%) did not specify a user group, typically focusing on technical or design aspects rather than applied contexts. Collectively, this distribution highlights the multi-tiered nature of dashboard usage, bridging strategic oversight, operational execution, and analytical interpretation, each requiring distinct levels of data abstraction, visualization complexity, and interactivity.
Fig. 14
Dashboard User Groups.
The distribution of evaluation metrics (Fig. 15. Dashboard Evaluation Metrics) indicates a predominant emphasis on usability (65.3%), confirming that most dashboard research prioritizes user experience and interface effectiveness as central evaluation criteria. These studies typically assess satisfaction, ease of navigation, and perceived usefulness, aligning with human–computer interaction (HCI) frameworks and user-centered design methodologies. In contrast, task completion time (34.7%) represents a more objective, performance-oriented metric, focusing on operational efficiency and quantifiable productivity gains. The dual prominence of these two measures reflects a balanced approach between perceptual quality and functional efficiency, where dashboards are expected not only to perform well but also to facilitate intuitive, error-free user interaction. Overall, the dominance of usability evaluation underscores a shift toward experience-driven dashboard design, emphasizing human factors as essential determinants of adoption and decision-making effectiveness.
Fig. 15
Dashboard Evaluation Metrics.
The analysis of data update modes (Fig. 16. Dashboard Data Update Mode) reveals that periodic updates (64%) are more prevalent than real-time updates (36%), reflecting the ongoing balance between data stability and immediacy in dashboard systems. Periodic or batch updates are typically used in strategic and managerial dashboards, where aggregated insights are sufficient for long-term decision-making and performance tracking. In contrast, real-time dashboards, though fewer in number, are increasingly applied in operational and monitoring contexts, such as healthcare, logistics, and manufacturing, where instant feedback and situational awareness are critical. The distribution suggests that while real-time integration is technologically feasible, its broader adoption remains limited by factors such as data latency, infrastructure cost, and synchronization complexity. Overall, this pattern highlights a gradual evolution toward live, adaptive dashboards, driven by the demand for responsiveness and the growing maturity of IoT and cloud-based data systems.
Fig. 16
Dashboard Data Update Mode.
The distribution of deployment platforms (Fig. 17. Dashboard Deployment Platforms) demonstrates that web-based dashboards (53.3%) overwhelmingly dominate current implementations, emphasizing the shift toward cloud-based and online analytical environments. These platforms facilitate accessibility, scalability, and integration with diverse data sources, aligning with the growing reliance on SaaS and BI cloud ecosystems. Mobile dashboards (18.7%) represent a significant secondary trend, reflecting the increasing demand for real-time, on-the-go decision support among field teams and executives. Desktop-based dashboards (4%) appear to be declining, likely due to limited flexibility and the maintenance burden of standalone systems. Meanwhile, 24% of studies did not specify the platform, often focusing on conceptual design or performance metrics rather than implementation details. Overall, the evidence highlights a paradigm shift toward web and mobile ecosystems, supporting ubiquitous access, collaborative analytics, and cross-device consistency in dashboard usage.
Fig. 17
Dashboard Deployment Platforms.
The distribution of performance outcomes (Fig. 18. Dashboard Performance Outcomes) reveals that decision speed (58.7%) is the most frequently measured impact, highlighting dashboards’ crucial role in enhancing cognitive and decision-making efficiency. Studies focusing on decision speed often emphasize the value of real-time data visualization, streamlined layouts, and automated insights, which collectively reduce information processing time for managers and analysts. Productivity (40%) forms the second major outcome, representing improvements in organizational and operational performance through better monitoring, coordination, and resource allocation. A small subset of studies (1.3%) addressed mobile performance impact, underscoring growing interest in mobility and accessibility as performance enablers in distributed work environments. Collectively, these findings underscore that dashboards contribute most directly to strategic responsiveness and operational efficiency, with evidence converging on their ability to accelerate informed decision-making and optimize performance outcomes across hierarchical levels.
Fig. 18
Dashboard Performance Outcomes.
The analysis of reported challenges (Fig. 19. Reported Dashboard Challenges) identifies two dominant issues shaping dashboard implementation and effectiveness. User overload (46.7%) emerges as the most prevalent concern, representing a cognitive and usability challenge linked to excessive information density, poor visual hierarchy, and lack of contextual filtering. This suggests that while dashboards enhance access to data, they can paradoxically hinder decision-making when complexity outweighs clarity. The second major challenge, data delays (44%), reflects persistent technical and systemic limitations in ensuring real-time accuracy and seamless data integration across multiple sources. These delays undermine trust and responsiveness, two essential attributes of effective dashboard systems. A smaller portion (9.3%) of studies did not report specific challenges, often focusing instead on conceptual design or framework validation. Collectively, the findings emphasize that usability optimization and system integration remain central barriers to realizing dashboards' full strategic potential, calling for continued refinement in both design ergonomics and data infrastructure reliability.
Fig. 19
Reported Dashboard Challenges.
The synthesis of future research directions (Fig. 20. Future Research Directions) reveals a diverse yet convergent roadmap for advancing dashboard science and practice. The most prominent theme, AI and automation in dashboards (13.3%), signals an emerging focus on intelligent, adaptive systems capable of predictive analytics, automated insights, and natural language interaction. This is closely followed by broader application and scalability (12.7%) and integration and interoperability (9.3%), highlighting the demand for cross-domain adaptability and seamless data connectivity across enterprise systems.
Themes such as visualization and design evolution (7%) and validation and empirical testing (6.6%) underscore the continued emphasis on refining user-centered design frameworks and establishing stronger evidence-based evaluation models. Meanwhile, strategic and managerial expansion (8%) and emerging technologies and extended environments (8%) reflect growing interest in dashboards that support complex organizational ecosystems, including IoT and AR/VR interfaces.
Less frequent yet crucial directions such as policy, governance, and sustainability (6.6%) indicate a shift toward ethical, secure, and socially responsible dashboard deployment. Collectively, these patterns suggest that the next stage of research will prioritize intelligent automation, interoperability, and contextual design, bridging the gap between technical innovation and strategic decision-making efficacy.
Fig. 20
Future Research Directions.
4 Discussion
4.1 Interpretation of Findings in the Context of Prior Studies and Working Hypotheses
This review synthesized research on digital dashboards published between 2016 and 2025, focusing on design principles, user interaction, and measurable business outcomes. The analysis revealed that dashboards are primarily employed for strategic decision support (≈ 39%), followed by operational monitoring (≈ 32%) and analytical or diagnostic functions (≈ 15%), with a smaller fraction addressing hybrid or sector-specific contexts (≈ 13%), such as healthcare, manufacturing, and SMEs.
Consistent with prior literature, dashboards continue to be recognized for their ability to enhance real-time decision-making, cross-departmental communication, and strategic alignment. However, despite wide acknowledgment of their value, empirical validation remains limited: less than 20% of studies provided robust, quantitative evidence of performance improvement. This reinforces earlier reviews noting the dominance of conceptual and design-oriented papers over longitudinal or evidence-based evaluations. The pattern suggests that while dashboard aesthetics and user interface design are well studied, their long-term organizational impact remains underexplored. Future research must therefore transition from descriptive and prescriptive approaches to empirically validated, outcome-driven models.
4.2 Limitations of Evidence Included in the Review
The reviewed body of evidence disproportionately represents large organizations and high-income economies, limiting the generalizability of findings to smaller enterprises or developing regions. Many studies rely on controlled or pilot implementations, providing insights into usability and visualization but rarely assessing sustained performance impact under real-world conditions.
Methodological diversity poses another challenge. Significant variation exists in dashboard architectures, visualization techniques, interactivity levels, and evaluation metrics, making direct comparison difficult. While some studies adopt structured frameworks such as the Balanced Scorecard or Goal Question Metric (GQM), others remain conceptual. Consequently, the field lacks standardized criteria for assessing dashboard effectiveness. Moreover, studies often prioritize interface aesthetics over quantifiable business metrics, resulting in uneven empirical depth across disciplines.
4.3 Limitations of the Review Process
Although this review adhered to systematic identification and quality assessment procedures, several limitations must be acknowledged. Relevant studies published in non-English languages or disseminated through industry white papers and internal reports may have been excluded, potentially narrowing the global perspective.
Publication bias remains a concern: positive or confirmatory results are more frequently reported than null or negative findings regarding dashboard efficacy. Furthermore, thematic classification required interpretive judgment because many studies addressed overlapping domains (strategic, operational, and analytical). To mitigate bias, dual independent screening and consensus procedures were applied, but a degree of subjectivity was unavoidable.
4.4 Implications for Practice, Policy, and Future Research
The synthesis demonstrates that well-designed digital dashboards substantially enhance decision-making, operational efficiency, and strategic cohesion across organizational hierarchies. Implementing dashboards with user-centered design, interactive visualization, and real-time data integration enables managers to detect performance deviations, allocate resources effectively, and respond swiftly to emerging challenges. For practitioners, this underscores the necessity of embedding dashboards into routine managerial workflows rather than treating them as peripheral reporting instruments.
From a policy standpoint, the findings advocate for standardized frameworks governing dashboard design, validation, and performance reporting. Policymakers and professional bodies should promote interoperable, scalable, and cloud-based solutions that ensure accessibility for SMEs and resource-limited organizations, particularly in data-driven public and private sectors. Future research should focus on:
Expanding empirical evaluations of dashboards in underrepresented regions and industries.
Developing cost-effective, scalable frameworks that integrate usability with measurable business outcomes.
Conducting longitudinal studies to capture long-term strategic and operational impacts.
Leveraging AI, predictive analytics, and adaptive visualization to enhance decision intelligence.
Exploring cross-functional collaboration and innovation facilitation, ensuring dashboards move beyond visualization toward actionable insight systems.
Ultimately, digital dashboards represent a pivotal interface between data and decision-making. Realizing their full potential requires a unified strategy that combines technological advancement, organizational policy, and evidence-based design principles to ensure enduring value in increasingly dynamic and data-intensive business environments.
5 Conclusion
This review analyzed 75 studies published between 2016 and 2025 to evaluate how digital dashboard design principles influence business performance across strategic, operational, and analytical contexts. The synthesis reveals that strategic dashboards dominate current research (≈ 39%), followed by operational dashboards (≈ 32%), while analytical dashboards remain underrepresented. Executives constitute roughly half of the identified user base, indicating that dashboards are primarily conceived as tools for strategic oversight and decision support, whereas analysts and field teams are less frequently addressed. Among design principles, KPI selection and visual hierarchy emerged as the most consistently applied (≈ 41% each), underscoring their centrality in ensuring information relevance and cognitive clarity. Simplicity, though less frequently emphasized, remains a key enabler of usability and reduced cognitive load. These principles were repeatedly associated with improvements in decision speed (≈ 59%) and productivity (≈ 40%), confirming their dual role in enhancing cognitive efficiency and organizational performance. Platform analysis showed that web-based dashboards overwhelmingly dominate implementations (≈ 53%), reflecting the shift toward cloud-based analytics and SaaS environments. Desktop and mobile platforms received comparatively limited attention, suggesting an ongoing gap in platform adaptability and cross-device optimization. In terms of data management, periodic refresh cycles (≈ 64%) remain the prevailing mode, while real-time dashboards (≈ 36%), though less common, were consistently linked with greater responsiveness and situational awareness. Persistent challenges identified across studies include user overload (≈ 47%) and data latency (≈ 44%), both of which constrain dashboard reliability and user trust.
Furthermore, limited longitudinal research and inconsistent reporting of performance indicators continue to hinder the consolidation of robust empirical evidence. Consequently, the overall certainty of findings was assessed as moderate, primarily due to methodological heterogeneity and limited empirical validation.
References
Adekunle BO (2022) Developing a digital operations dashboard for real-time financial compliance monitoring in multinational corporations. J Bus Res 144:1–14. https://doi.org/10.1016/j.jbusres.2022.02.014
Almasi S, Shamsi M (2023) Usability evaluation of dashboards: A systematic review. J Usability Stud 18(1):1–19. https://doi.org/10.5555/jus.2023.1801
Alzghoul A, Khaddam AA, Abousweilem F, Irtaimeh HJ, Alshaar Q (2024) How business intelligence capability impacts decision-making speed, comprehensiveness, and firm performance. J Bus Res 160:113–123. https://doi.org/10.1016/j.jbusres.2024.06.013
Almeida R, Silva A (2020) Development of managerial key performance indicators for a hospital pharmacy digital dashboard. https://www.ocerints.org/intcess19_e-publication/papers/412.pdf
Assaad RH, Mohammadi M, Poudel O (2025) Developing an intelligent IoT-enabled wearable multimodal biosensing device and cloud-based digital dashboard for real-time and comprehensive health, physiological, emotional, and cognitive monitoring using multi-sensor fusion technologies. Sens Actuators A: Phys 381:116074. https://doi.org/10.1016/j.sna.2024.116074
Azadmanjir Z, Khoshhal Y (2024) The design of a quality improvement dashboard for neonatal intensive care unit. J Neonatal Nurs 30(2):98–105. https://doi.org/10.1016/j.jnn.2023.11.004
Reinking J (2020) Synthesizing enterprise data through digital dashboards to improve managerial and organizational performance. Int J Acc Inform Syst 37:1–19. https://doi.org/10.1016/j.accinf.2020.100452
Boulton CA, McDonald R (2020) Patient responses to daily cardiac resynchronization therapy device data: A pilot trial assessing a novel patient-centered digital dashboard in everyday life. Eur J Cardiovasc Nurs 19(6):487–495. https://doi.org/10.1177/1474515120900191
Christen OM, Mösching Y, Müller P (2020) Dashboard visualization of information for emergency medical services. Integrated citizen-centered digital health and social care. IOS, pp 1–10. https://doi.org/10.3233/SHTI200688
Cheng J, Zhang Y (2019) Designing manufacturing dashboards on the basis of a comprehensive survey. Procedia CIRP 81:340–345. https://doi.org/10.1016/j.procir.2019.03.063
Choi S, Lee J (2020) The role of dashboards in business decision making and performance management. (White paper / working paper). https://www.researchgate.net/publication/353307344
Cotten SW, Shirley RB, Anne S, Georgopoulos R, Appachi S, Hopkins B (2022) Utility of the finance-electronic medical record digital dashboard in pediatric otolaryngology. Am J Otolaryngol 43(5):103598. https://doi.org/10.1016/j.amjoto.2022.103598
Davis R, Smith T (2018) Utilization of process mining in discrete manufacturing processes (Master’s thesis). Tampere University. https://trepo.tuni.fi/bitstream/10024/226273/2/KemppiLauri.pdf
Gremyr A, Andersson Gäre B, Greenhalgh T, Malm U, Thor J, Andersson A-C (2020) Using complexity assessment to inform the development and deployment of a digital dashboard for schizophrenia care: Case study. J Med Internet Res 22(4):e15521. https://doi.org/10.2196/15521
Tang UH, Thor J (2024) Exploring the role of complexity in health care technology implementation: A case study. JMIR Hum Factors 11(1):e50889. https://doi.org/10.2196/50889
Yi S, Burke C, Reilly A, Straube S, Graterol J, Peabody CR (2023) Designing and developing a digital equity dashboard for the emergency department. JACEP Open 4(4):e12997. https://doi.org/10.1002/emp2.12997
Wang J, Wong KLM, Olubodun T, Cleland J (2024) Developing policy-ready digital dashboards of geospatial access to emergency obstetric care: A survey of policymakers and researchers in sub-Saharan Africa. Health Technol 14:69–80. https://doi.org/10.1007/s12553-023-00793
Dowding D, Merrill J, Onorato N, Barrón Y, Rosati RJ, Russell D (2018) Impact of home care nurses’ numeracy and graph literacy on comprehension of visual display information: Implications for dashboard design. J Am Med Inform Assoc 25(10):1363–1370. https://doi.org/10.1093/jamia/ocx042
Reinking J (2020) Synthesizing enterprise data to strategically align performance measures. Int J Acc Inform Syst 36:1–14. https://doi.org/10.1016/j.accinf.2019.100444
Piquet BUC (2023) Dashboard design for key performance indicators visualization of STEAM government initiatives: A case study. In Proceedings of the XI Latin American Conference on Human-Computer Interaction. https://doi.org/10.1145/3630970.363104
Skosana S, Mlambo S, Madiope T, Thango B (2024) Evaluating Wireless Network Technologies (3G, 4G, 5G) and Their Infrastructure: A Systematic Review. SSRN. https://doi.org/10.2139/ssrn.4992432
Dumitriu B, Dumitriu A, Socol F, Socol I, Gluhovschi A (2025) Contraceptive Barriers and Psychological Well-Being After Repeat Induced Abortion: A Systematic Review. Behav Sci 15(10):1363. https://doi.org/10.3390/bs15101363
Conte L, Tumolo M, De Nunzio G, De Giorgi U, Guarino R, Cascio D, Cucci F (2025) The Prognostic Power of miR-21 in Breast Cancer: A Systematic Review and Meta-Analysis. Int J Mol Sci 26(19):9713. https://doi.org/10.3390/ijms26199713
Khanyi M, Xaba S, Mlotshwa N, Thango B, Lerato M (2024) The Role of Data Networks and APIs in Enhancing Operational Efficiency in SME: A Systematic Review. SSRN. https://doi.org/10.2139/ssrn.4984455
Luo H, Su H, Tang Q, Nisa F, He L, Zhang T, Liu X, Liu Z (2025) Review of Research Advances in Gyroscopes’ Structural Forms and Processing Technologies Viewed from Performance Indices. Sensors 25(19):6193. https://doi.org/10.3390/s25196193
Parvizi P, Amidi A, Zangeneh M, Riba J, Jalilian M (2025) A Taxonomy of Robust Control Techniques for Hybrid AC/DC Microgrids: Review. Eng 6(10):267. https://doi.org/10.3390/eng6100267
Li W, Sadeh O, Chakraborty J, Yang E, Basu P, Kumar P (2025) Multifaceted Antibiotic Resistance in Diabetic Foot Infections: A Systematic Review. Microorganisms 13(10):2311. https://doi.org/10.3390/microorganisms13102311
Mtjilibe T, Rameetse E, Mgwenya N, Thango B (2024) Exploring the Challenges and Opportunities of Social Media for Organizational Engagement in SMEs: A Comprehensive Systematic Review. SSRN. https://doi.org/10.2139/ssrn.4998542
Kurniawan F, Agustian H, Dermawan D, Nurdin R, Ahmadi N, Dinaryanto O (2025) Hybrid Rule-Based and Reinforcement Learning for Urban Signal Control in Developing Cities: A Systematic Literature Review and Practice Recommendations for Indonesia. Appl Sci 15(19):10761. https://doi.org/10.3390/app151910761
Agapiou A (2025) Unequal Horizons: Global North–South Disparities in Archaeological Earth Observation (2000–2025). Remote Sens 17(19):3371. https://doi.org/10.3390/rs17193371
Sechele G, Rabedzwa G, Nongayo S, Thango B (2024) Systematic Review on SEO and Digital Marketing Strategies for Enhancing Retail SMEs' Performance. Preprints. https://doi.org/10.20944/preprints202410.1715.v1
Katsandres S, Scheele S, Kiener T, Bloudek L (2025) Public Health Impact of the MVA-BN Vaccine During the 2022 Mpox Outbreak: A Systematic Review. Infect Dis Rep 17(5):124. https://doi.org/10.3390/idr17050124
Rodrigues P, Neto A, Alonso do Espírito Santo L, Tribess S, Virtuoso Junior J (2025) Walking Football as a Multidimensional Intervention for Healthy Aging: A Scoping Review of Physical and Functional Outcomes in Older Adults. Int J Environ Res Public Health 22(10):1533. https://doi.org/10.3390/ijerph22101533
Vesković J, Onjia A (2025) Exposure and Toxicity Factors in Health Risk Assessment of Heavy Metal(loid)s in Water. Water 17(19):2901. https://doi.org/10.3390/w17192901
Dini P, Saponara S, Chakraborty S, Hegazy O (2025) Modeling, Control and Monitoring of Automotive Electric Drives. Electronics 14(19):3950. https://doi.org/10.3390/electronics14193950
Muraba J, Mamogobo MK, Thango B (2024) The Balanced Scorecard Methodology: Performance Metrics and Strategy Execution in SMEs: A Systematic Review. SSRN. https://doi.org/10.2139/ssrn.4996929
Cimarelli C, Millan-Romera J, Voos H, Sanchez-Lopez J (2025) Hardware, Algorithms, and Applications of the Neuromorphic Vision Sensor: A Review. Sensors 25(19):6208. https://doi.org/10.3390/s25196208
Radočaj D, Radočaj P, Plaščak I, Jurišić M (2025) Evolution of Deep Learning Approaches in UAV-Based Crop Leaf Disease Detection: A Web of Science Review. Appl Sci 15(19):10778. https://doi.org/10.3390/app151910778