3.1 Study Selection (flow of studies)
The flow chart in Fig. 7 illustrates the study selection procedure for this systematic review of digital dashboard design principles and their impact on business performance. Technical literature on the topic was searched across the selected academic databases using the keywords listed in the "Search Strategy" section, and retrieved articles were screened against the inclusion and exclusion criteria defined in the "Eligibility Criteria" section. The initial search across all selected databases returned 5,793 records, which were first screened on titles and abstracts. As described in Fig. 8, a total of 75 studies were ultimately found to be relevant to the review's focus: 26.25% from Google Scholar, 42.84% from Scopus, and 30.91% from Web of Science. By publication type, the retrieved material comprised 15.37% book chapters, 23.58% conference papers, 1.30% dissertations, 5.58% theses, and 54.18% journal articles (see Fig. 10). After removal of duplicate entries, 75 unique studies remained and were assessed in full text. All 75 studies met the specified eligibility criteria and were therefore included in the final review; no studies were excluded during full-text screening, ensuring comprehensive representation of the literature on digital dashboards, design principles, and business performance.
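For readers who wish to trace the selection numbers programmatically, the minimal sketch below tallies the counts reported above; the stage labels are illustrative and are not part of the review protocol, and Python is used here purely for presentation.

# Minimal tally of the selection flow reported above; stage labels are
# illustrative and counts are taken directly from the text.
selection_flow = [
    ("Records identified and screened on title/abstract", 5793),
    ("Unique studies retained after duplicate removal", 75),
    ("Studies assessed in full text", 75),
    ("Studies excluded at full-text screening", 0),
    ("Studies included in the final review", 75),
]

for stage, count in selection_flow:
    print(f"{stage:<52} {count:>5}")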
The temporal analysis of publication trends (Fig. 8. Publication Year Distribution) reveals a clear four-phase developmental trajectory in dashboard research. The Early Period (2016–2018) represents the formative stage, accounting for 18.7% of publications, where foundational design concepts and exploratory implementations emerged. This was followed by a pronounced Growth Period (2019–2021) contributing 41.3% of studies, coinciding with the rise of cloud computing, real-time analytics, and broader enterprise adoption of dashboards. The Maturity Period (2022–2023) reflects 24.0% of publications, demonstrating a consolidation of theoretical frameworks, usability refinement, and cross-domain integration. Finally, the Emerging Peak (2024–2025) marks a renewed surge of scholarly output, emphasizing artificial intelligence, automation, and predictive capabilities in dashboards. Collectively, these trends indicate an evolution from experimentation to sophistication, underscoring an accelerating momentum in dashboard-related research that aligns with advancements in data-driven decision-making and digital transformation.
The geographical distribution of reviewed papers (Fig. 9. Distribution of reviewed papers by country and region) demonstrates a pronounced concentration of research activity in North America, Europe, and the Asia–Pacific regions, reflecting the global but uneven diffusion of dashboard-related studies. The United States (17.3%) and India (10.7%) emerge as the leading contributors, underscoring both advanced digital ecosystems and expanding research capacity in developing economies. European countries, including Germany, the UK, and Portugal, collectively account for a substantial share, reflecting long-standing engagement with performance management and visualization research traditions. Meanwhile, contributions from Africa (notably Ghana and Nigeria) and Latin America (Ecuador and Mexico), though modest, indicate growing participation from emerging regions. This regional pattern highlights a globalizing but still asymmetric research landscape, in which high-income economies dominate methodological innovation while developing regions increasingly explore contextual and applied implementations of dashboard systems in education, healthcare, and governance.
The classification of publication types (Fig. 10. Publication Type Distribution) reveals that journal articles (60%) constitute the majority of the reviewed works, reflecting the field’s increasing academic maturity and its focus on rigorous, peer-reviewed dissemination. Conference papers (18.7%) represent the second-largest category, indicating an active community of scholars presenting preliminary findings and methodological innovations at academic and professional gatherings. Book chapters (12%) highlight the field’s growing integration within edited academic volumes, often focusing on applied frameworks and cross-disciplinary case studies. Lastly, theses and dissertations (8.3%) contribute a smaller yet important portion, signaling ongoing engagement by emerging researchers and the academic consolidation of dashboard research as a standalone topic. Collectively, this distribution underscores a transition from exploratory conference outputs toward stable, peer-reviewed journal research, evidencing both scholarly maturation and the establishment of dashboards as a recognized area within information systems and data visualization research.
The distribution of indexing sources (Fig. 11. Indexing Source Distribution) indicates a balanced yet stratified publication landscape across different academic databases. Google Scholar accounts for the largest share (34.7%), reflecting its broad inclusivity and capacity to capture a diverse range of materials, including gray literature and non-indexed outputs. Scopus follows closely with 33.3%, representing publications within indexed scholarly databases that emphasize methodological rigor and international visibility. Web of Science, comprising 32%, denotes high-impact and peer-reviewed outlets, underscoring the presence of mature, well-cited studies within the field. The near parity among these three sources suggests that dashboard research enjoys wide academic coverage across both open-access and high-impact domains, indicating sustained scholarly interest and increasing legitimacy within the broader data analytics and information systems research community.
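As a rough consistency check, the reported shares can be converted back into approximate study counts out of the 75 included works. The short sketch below does this with the percentages quoted above; the rounding is indicative only.

# Convert the reported indexing-source shares into approximate study counts.
# Percentages are quoted from the review; rounded counts are illustrative.
TOTAL_STUDIES = 75

source_shares = {
    "Google Scholar": 34.7,
    "Scopus": 33.3,
    "Web of Science": 32.0,
}

for source, share in source_shares.items():
    count = round(TOTAL_STUDIES * share / 100)
    print(f"{source:<15} {share:>5.1f}%  ~ {count} studies")

# The rounded counts (26, 25, and 24) sum back to the 75 included studies.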
The thematic classification of dashboard applications (Fig. 12. Dashboard Application Focus) illustrates a clear dominance of strategic (38.7%) and operational (32%) dashboards, signifying their centrality in supporting high-level decision-making and process management across organizations. Analytical dashboards (14.7%) occupy a smaller yet important segment, focusing on diagnostic insights and data exploration to guide performance improvements. A minority of studies (13.3%) were mixed or not specified, often describing hybrid frameworks that span multiple organizational levels. Finally, a small subset (1.3%) focused on iterative design and KPI visualization, representing highly specialized research concerned with visualization refinement and usability testing. Collectively, these distributions highlight a multi-layered research orientation with dashboards evolving from purely operational monitoring tools to strategic intelligence systems that integrate analytics, user interaction, and adaptive visualization for data-driven management.
The analysis of design principles (Fig. 13. Dashboard Design Principles) reveals that most studies emphasize KPI selection (41.3%) and visual hierarchy (41.3%) as the dominant design foundations in dashboard development. These two principles collectively define how dashboards convey meaning, ensuring that critical indicators are prioritized and that visual organization supports cognitive clarity and decision accuracy. Simplicity (10.7%) appears as a recurring theme across user-centered studies, aligning with minimalist design trends aimed at reducing cognitive load and improving usability. Only a small fraction (6.7%) of studies left their design principles unspecified, suggesting growing methodological transparency in dashboard research. Overall, the evidence highlights a strong adherence to core design and visualization principles, emphasizing that effective dashboards rely not only on data integration but also on strategic presentation and perceptual optimization, balancing analytical depth with visual clarity for improved user engagement and decision-making performance.
The classification of user groups (Fig. 14. Dashboard User Groups) demonstrates that executives (49.3%) constitute the primary target audience of dashboard implementations, underscoring their central role in strategic monitoring and high-level decision-making. Field teams (22.7%) represent the second-largest user segment, reflecting the growing adoption of dashboards in operational and real-time contexts, particularly in logistics, healthcare, and maintenance environments. Analysts (20%) form a specialized category, employing dashboards primarily for data diagnostics and performance evaluation, often integrating advanced visualization and exploratory tools. A smaller proportion of studies (8%) did not specify a user group, typically focusing on technical or design aspects rather than applied contexts. Collectively, this distribution highlights the multi-tiered nature of dashboard usage, bridging strategic oversight, operational execution, and analytical interpretation, each requiring distinct levels of data abstraction, visualization complexity, and interactivity.
The distribution of evaluation metrics (Fig. 15. Dashboard Evaluation Metrics) indicates a predominant emphasis on usability (65.3%), confirming that most dashboard research prioritizes user experience and interface effectiveness as central evaluation criteria. These studies typically assess satisfaction, ease of navigation, and perceived usefulness, aligning with human–computer interaction (HCI) frameworks and user-centered design methodologies. In contrast, task completion time (34.7%) represents a more objective, performance-oriented metric, focusing on operational efficiency and quantifiable productivity gains. The dual prominence of these two measures reflects a balanced approach between perceptual quality and functional efficiency, where dashboards are expected not only to perform well but also to facilitate intuitive, error-free user interaction. Overall, the dominance of usability evaluation underscores a shift toward experience-driven dashboard design, emphasizing human factors as essential determinants of adoption and decision-making effectiveness.
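To make these two metric families concrete, the sketch below illustrates one hypothetical way an evaluation study might capture both an objective task completion time and a subjective usability rating for a dashboard task; the function names, the 0-100 rating scale, and the timing approach are assumptions for illustration, not a procedure reported in the reviewed studies.

import time

# Hypothetical evaluation record combining the two metric families discussed above:
# an objective task completion time and a subjective usability rating.
def run_dashboard_task(task_fn):
    """Time a single dashboard task and return its duration in seconds."""
    start = time.perf_counter()
    task_fn()                       # e.g., locate a KPI, apply a filter, export a view
    return time.perf_counter() - start

def record_session(participant_id, task_fn, usability_score):
    """Bundle the objective and subjective measures for one participant."""
    return {
        "participant": participant_id,
        "task_completion_s": run_dashboard_task(task_fn),
        "usability_score": usability_score,   # e.g., a 0-100 SUS-style rating
    }

# Example usage with a placeholder task standing in for real dashboard interaction.
session = record_session("P01", task_fn=lambda: time.sleep(0.1), usability_score=82.5)
print(session)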
The analysis of data update modes (Fig. 16. Dashboard Data Update Mode) reveals that periodic updates (64%) are more prevalent than real-time updates (36%), reflecting the ongoing balance between data stability and immediacy in dashboard systems. Periodic or batch updates are typically used in strategic and managerial dashboards, where aggregated insights are sufficient for long-term decision-making and performance tracking. In contrast, real-time dashboards, though fewer in number, are increasingly applied in operational and monitoring contexts, such as healthcare, logistics, and manufacturing, where instant feedback and situational awareness are critical. The distribution suggests that while real-time integration is technologically feasible, its broader adoption remains limited by factors such as data latency, infrastructure cost, and synchronization complexity. Overall, this pattern highlights a gradual evolution toward live, adaptive dashboards, driven by the demand for responsiveness and the growing maturity of IoT and cloud-based data systems.
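The two update modes can be contrasted with a minimal sketch: a periodic dashboard pulls an aggregated snapshot on a fixed schedule, whereas a real-time dashboard re-renders as each event arrives. The refresh interval, data fields, and function names below are hypothetical.

import time

# Periodic (batch) refresh: pull an aggregated snapshot on a fixed schedule.
def periodic_refresh(fetch_snapshot, render, interval_s=300, cycles=2):
    for _ in range(cycles):
        render(fetch_snapshot())        # refresh the view with the latest aggregate
        time.sleep(interval_s)

# Real-time (event-driven) update: re-render as each new data point arrives.
class RealTimeDashboard:
    def __init__(self, render):
        self.render = render

    def on_event(self, data_point):
        self.render(data_point)         # immediate, per-event update

# Hypothetical usage: the same render function served by either mode.
render = lambda payload: print("render:", payload)

dashboard = RealTimeDashboard(render)
dashboard.on_event({"sensor": "line_3", "throughput": 118})

periodic_refresh(lambda: {"kpi": "avg_throughput", "value": 112},
                 render, interval_s=1, cycles=2)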
The distribution of deployment platforms (Fig. 17. Dashboard Deployment Platforms) demonstrates that web-based dashboards (53.3%) overwhelmingly dominate current implementations, emphasizing the shift toward cloud-based and online analytical environments. These platforms facilitate accessibility, scalability, and integration with diverse data sources, aligning with the growing reliance on SaaS and BI cloud ecosystems. Mobile dashboards (18.7%) represent a significant secondary trend, reflecting the increasing demand for real-time, on-the-go decision support among field teams and executives. Desktop-based dashboards (4%) appear to be declining, likely due to limited flexibility and the maintenance burden of standalone systems. Meanwhile, 24% of studies did not specify the platform, often focusing on conceptual design or performance metrics rather than implementation details. Overall, the evidence highlights a paradigm shift toward web and mobile ecosystems, supporting ubiquitous access, collaborative analytics, and cross-device consistency in dashboard usage.
The distribution of performance outcomes (Fig. 18. Dashboard Performance Outcomes) reveals that decision speed (58.7%) is the most frequently measured impact, highlighting dashboards’ crucial role in enhancing cognitive and decision-making efficiency. Studies focusing on decision speed often emphasize the value of real-time data visualization, streamlined layouts, and automated insights, which collectively reduce information processing time for managers and analysts. Productivity (40%) forms the second major outcome, representing improvements in organizational and operational performance through better monitoring, coordination, and resource allocation. A small subset of studies (1.3%) addressed mobile performance impact, underscoring growing interest in mobility and accessibility as performance enablers in distributed work environments. Collectively, these findings underscore that dashboards contribute most directly to strategic responsiveness and operational efficiency, with evidence converging on their ability to accelerate informed decision-making and optimize performance outcomes across hierarchical levels.
The analysis of reported challenges (Fig. 19. Reported Dashboard Challenges) identifies two dominant issues shaping dashboard implementation and effectiveness. User overload (46.7%) emerges as the most prevalent concern, representing a cognitive and usability challenge linked to excessive information density, poor visual hierarchy, and lack of contextual filtering. This suggests that while dashboards enhance access to data, they can paradoxically hinder decision-making when complexity outweighs clarity. The second major challenge, data delays (44%), reflects persistent technical and systemic limitations in ensuring real-time accuracy and seamless data integration across multiple sources. These delays undermine trust and responsiveness, two essential attributes of effective dashboard systems. A smaller portion (9.3%) of studies did not report specific challenges, often focusing instead on conceptual design or framework validation. Collectively, the findings emphasize that usability optimization and system integration remain central barriers to realizing dashboards’ full strategic potential, calling for continued refinement in both design ergonomics and data infrastructure reliability.
The synthesis of future research directions (Fig. 20. Future Research Directions) reveals a diverse yet convergent roadmap for advancing dashboard science and practice. The most prominent theme, AI and automation in dashboards (13.3%), signals an emerging focus on intelligent, adaptive systems capable of predictive analytics, automated insights, and natural language interaction. This is closely followed by broader application and scalability (12.7%) and integration and interoperability (9.3%), highlighting the demand for cross-domain adaptability and seamless data connectivity across enterprise systems.
Themes such as visualization and design evolution (7%) and validation and empirical testing (6.6%) underscore the continued emphasis on refining user-centered design frameworks and establishing stronger evidence-based evaluation models. Meanwhile, strategic and managerial expansion (8%) and emerging technologies and extended environments (8%) reflect growing interest in dashboards that support complex organizational ecosystems, including IoT and AR/VR interfaces.
Less frequent yet crucial directions, such as policy, governance, and sustainability (6.6%), indicate a shift toward ethical, secure, and socially responsible dashboard deployment. Collectively, these patterns suggest that the next stage of research will prioritize intelligent automation, interoperability, and contextual design, bridging the gap between technical innovation and strategic decision-making efficacy.