The founders of Cavisson have extraordinary expertise in mission-critical enterprise applications that have gone on to become massive successes within the industry. Having worked closely with end users, we came to see that customer experience is paramount to the success of any enterprise application, and that the basic ingredients of an exceptional customer experience are quality, performance, reliability, and availability. We also understand the limitations of the traditional siloed approach, which relies on separate products for quality, performance, chaos testing, service virtualization, and the various aspects of observability. Lacking coordination and cohesion among these siloed tools, the traditional approach is grossly under-equipped to ensure an exceptional end-customer experience.
For instance, a siloed approach to quality engineering results in a lack of functional-testing automation. While there has been some progress in test execution, test-case and script creation remain fully manual. Furthermore, enterprises accumulate thousands of test cases for a given software module over time, yet determining which tests should optimally be executed when a change or fix is made is still an entirely manual process. One can execute all the tests, but doing so may be resource- and time-intensive, while a partial selection of test cases may be dangerous.
Cavisson’s integrated and AI-assisted solution addresses this issue by automatically:
- Creating test cases by learning new usage patterns from real end-user actions in production
- Identifying “newly discovered” test cases
- Creating test scripts and intelligently tagging them with the functionality and software module down to individual lines of code
- Identifying the test cases to be executed based on software changes using the above intelligence
- Monitoring both the functionality and code-level execution during testing by integrating end-to-end observability and helping identify the causes of failed test cases
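The change-based test selection described above can be sketched as a simple test-impact index. The following is a hypothetical, file-level illustration; Cavisson's tagging reportedly reaches individual lines of code, and every name below is invented for the example:

```python
# Toy test-impact selection: each test is tagged with the source files it
# exercises, and a change set selects only the tests whose tags intersect it.

def select_impacted_tests(test_tags, changed_files):
    """Return the tests whose tagged files overlap the set of changed files."""
    changed = set(changed_files)
    return sorted(test for test, files in test_tags.items()
                  if changed & set(files))

# Hypothetical tag index built from earlier instrumented runs.
test_tags = {
    "test_checkout": ["cart.py", "payment.py"],
    "test_login":    ["auth.py"],
    "test_search":   ["search.py", "index.py"],
}

# A fix touching payment.py selects only the checkout test.
print(select_impacted_tests(test_tags, ["payment.py"]))  # ['test_checkout']
```

In practice such an index would be produced by code-coverage instrumentation during test runs, so it stays current as the suite evolves.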
As this illustration depicts, the entire quality assurance and observability ecosystem becomes highly integrated and automated, taking its effectiveness to a whole new level. This would simply be impossible if the usage patterns discovery, code-level observability, test script creation, and test execution were not deeply interconnected and automated. That’s the reason a completely new ground-up approach was needed to solve this multi-dimensional challenge.
This was just one example. In another scenario, the observability and resiliency components of our solution work in tandem with performance testing to recreate real-life scenarios in pre-production environments. They predict potential production performance and resiliency issues and determine the root causes behind them. This revolutionizes the entire customer experience management space.
Innovation has been the driving force behind our product’s development and evolution from the very beginning. We started by creating performance-testing and service-virtualization products to target the fundamental gaps we observed in the performance-testing space. It is more of a rule than an exception that certain issues are seen only in production, despite performance tests being executed successfully. To further complicate matters, if a certain “fix” is planned to resolve a production issue, a basic question arises: how can its efficacy be verified if that production issue is not reproducible in a pre-production environment? To tackle this, we built a proprietary solution, InternetTrue™, which serves as the building block of our performance-testing product’s core. InternetTrue™ is a cross-discipline simulation drawing on advances in networking, operating systems, and databases to simulate real-life traffic patterns, load models, concurrency, extreme scalability, and much more, resulting in the high-fidelity recreation of production scenarios in a lab environment.
After developing InternetTrue™, we recognized that once issues surface, it is critically important to identify the root causes of those issues in order to identify possible actionable inputs and improve upon the efficacy of applications. This is what elevates simple performance testing to the level of performance engineering.
We combine the latest advances in high-performance computing and data science with proprietary algorithms to deliver orders-of-magnitude faster, unparalleled analytical capabilities.
To achieve this, we incorporated several innovative concepts into our processes and added various aspects of application observability, including infrastructure, application, database, log, and user-experience monitoring, with data enrichment and extreme efficiency at its core. The core of this innovation is KeyData™, which combines several proprietary algorithms and procedures in big data, high-performance computing, and machine learning to collect, manage, and draw inferences from real-time data. This solution has helped identify and resolve issues at several Fortune 100 enterprises that legacy systems had previously left unaddressed.
We further expanded our observability solution by adding synthetic monitoring and voice-of-customer capabilities with deeply interconnected correlation parameters. The result is a novel end-to-end observability platform for enterprise production environments, aimed at drastically reducing the mean time to detect and restore (MTTD and MTTR) via our built-in auto-remediation capabilities.
Next, we added chaos engineering capabilities to support the most innovative and varied chaos experiments. However, the real power came by deeply integrating these facets with performance testing and observability. We’ve also recently added an AI-assisted, automated testing and validation module aimed at providing a highly effective and versatile experience management ecosystem.
For us, it was not about recreating or even improving upon an existing product in this space. Our goal instead was to create a solution from the ground up to meet the challenges faced by enterprises in an increasingly distributed and complex software landscape. We’ve built an extremely scalable, enriched data collection platform with state-of-the-art proprietary algorithms and continuously trained AI models that legacy solutions simply cannot provide.
It has taken several thousand engineering person-years to create our current experience management platform to provide protection to our customers’ mission-critical enterprise applications.
With some of the brightest minds in the application performance space, and by leveraging cutting-edge research and advances in data science and computing, we have created a highly innovative solution to protect our customers’ mission-critical enterprise applications. Our technology not only ties together all aspects of performance engineering but is also highly optimized to consume minimal resources while providing unmatched insights. For example:
- One of the most commonly used constructs in decision-making is calculating percentiles; our procedure is at least 100x faster and more efficient than any known alternative
- Another common construct is identifying matching patterns while searching for the root cause of an issue. In a typical scenario, this process involves matching across millions of metrics. Our highly efficient algorithms correlate 20 million metrics/sec on a single core to draw inferences, whereas current solutions either limit such automation or leave the task to users to perform manually, leaving the majority of issues unresolved.
- Finding correlated metrics may help narrow millions of metrics down to tens, allowing engineers to apply their system knowledge to determine the root cause. However, consider a scenario in which response time, database write counts, and CPU utilization are observed to be correlated. Such a correlation does not imply any causal relationship among these metrics. Our product’s AI-assisted algorithms derive causality relationships via directed graphs, helping identify the “real root cause.”
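On the percentile point: one reason percentile computation can be made far faster than a naive implementation is that an exact percentile requires only a selection, not a full sort. A minimal sketch in Python/NumPy, illustrating the general principle rather than Cavisson's proprietary procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=100.0, scale=15.0, size=1_000_000)  # e.g. response times

def fast_percentile(x, q):
    """Exact q-th percentile via quickselect: expected O(n), no O(n log n) sort."""
    k = int(q / 100.0 * (len(x) - 1))   # 'lower' index convention
    return np.partition(x, k)[k]        # partition puts the k-th smallest at index k

p99 = fast_percentile(samples, 99)
```

`np.partition` uses an introselect-style algorithm, so the p99 of a million samples is found without ever fully ordering them.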
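The metric-screening idea above, narrowing a huge metric population down to a shortlist of correlated suspects, can be illustrated with a single vectorized pass of Pearson correlation. This is a hedged sketch on synthetic data, not Cavisson's algorithm; a real deployment would shard the work across cores and time windows:

```python
import numpy as np

rng = np.random.default_rng(1)
n_metrics, n_samples = 10_000, 120                   # 10k metrics, 120 intervals
metrics = rng.normal(size=(n_metrics, n_samples))

# Synthetic "incident": the target (say, response time) secretly tracks metric 42.
target = 2.0 * metrics[42] + rng.normal(scale=0.1, size=n_samples)

# One matrix multiply yields Pearson r for every metric against the target,
# instead of 10,000 per-metric loops.
m = metrics - metrics.mean(axis=1, keepdims=True)
t = target - target.mean()
r = (m @ t) / (np.linalg.norm(m, axis=1) * np.linalg.norm(t))

suspects = np.argsort(-np.abs(r))[:5]                # 10k metrics -> 5 suspects
```

The culprit metric surfaces at the top of `suspects`, which is exactly the narrowing step an engineer would otherwise perform by hand.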
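Correlation is symmetric, so by itself it cannot orient an edge in a causal graph. One simple, well-known heuristic, offered here as an illustrative stand-in rather than Cavisson's proprietary method, is to compare time-lagged correlations: if past database writes predict future response time much better than the reverse, the edge points from writes to response time.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
db_writes = rng.normal(size=n)
# Response time reacts to write volume two intervals later (plus noise),
# so the true causal direction is db_writes -> response_time.
response_time = 1.5 * np.roll(db_writes, 2) + rng.normal(scale=0.2, size=n)

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag], lag >= 1."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# Orient the edge toward the direction with the stronger lagged correlation.
fwd = max(abs(lagged_corr(db_writes, response_time, k)) for k in range(1, 5))
rev = max(abs(lagged_corr(response_time, db_writes, k)) for k in range(1, 5))
edge = "db_writes -> response_time" if fwd > rev else "response_time -> db_writes"
```

Production-grade causal discovery (e.g. Granger-style tests or constraint-based graph algorithms) generalizes this intuition across many metrics at once to build the directed graph.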
Gartner recently introduced the Digital Immune System as one of the top technology trends of 2023. Furthermore, Gartner outlines quality, performance, chaos engineering, and availability as essential components of a healthy enterprise’s digital ecosystem, which effectively validates our founding beliefs. While we have referred to this as “experience management” from our beginnings as a company rather than “digital immunity,” suffice it to say that we are ready to provide a working solution that covers all these aspects and ensures the success of enterprise applications.
https://www.gartner.com/en/articles/what-is-a-digital-immune-system-and-why-does-it-matter