JENOVA: The First AI Operating System
September 24, 2024
The iPhone is one of the most successful products of all time. It is used by over 1.4 billion people worldwide and hosts an App Store ecosystem facilitating over $1 trillion in transactions every year.
When the iPhone was first launched in 2007, Apple did not design or produce any of the core underlying hardware technology. The main application processor and flash memory chips were designed by Samsung, the touch-screen glass was supplied by Balda, the baseband processor was built by Infineon, the Wi-Fi chip was provided by Broadcom, and the list goes on. In today’s AI parlance, you could say that the original iPhone was just a “wrapper”.
So what made the iPhone stand out? Aside from state-of-the-art hardware and Steve Jobs’ magic, its success can primarily be attributed to the iPhone OS, or iOS.
From its inception, iOS was designed with a singular focus on user experience, featuring an intuitive interface that even the least tech-savvy could navigate. The "slide to unlock" feature, the grid of apps, and the capacitive touch interface all worked together to create a delightful experience. This ease of use lowered the barrier to entry for millions of users who might have otherwise been intimidated by the complexity of smartphones.
Beyond its intuitive interface, iOS's brilliance lay in its seamless integration of hardware and software that brought out the best in each component. This integration allowed iOS to optimize performance, maximize battery life, and deliver a smooth, responsive experience that outshone competitors with seemingly superior specifications. All this created a unified platform that enabled users to access cutting-edge technologies and capabilities within a single, cohesive device.
With this powerful unified platform, iOS inadvertently created the perfect environment for crowdsourcing innovation. The introduction of the App Store in 2008 opened the floodgates to a world of third-party creativity. iOS provided developers with a consistent framework of tools and APIs, which allowed them to tap into the iPhone's capabilities and create applications that were previously unimaginable on a mobile device. This move effectively turned millions of developers worldwide into contributors to the iPhone ecosystem.
Fast forward to 2024, and the AI landscape bears a striking resemblance to the environment that gave rise to the iPhone. Rapid advances in LLMs, each with growing specialization, have made the field increasingly difficult for everyday people to navigate. OpenAI’s latest o1 model specializes in math and reasoning, Claude 3.5 Sonnet excels in general intelligence and knowledge, Llama 3.1 405B shines in roleplaying and creativity; and this is without delving into models of other modalities (such as Flux, Kling, Suno), retrieval augmented generation (RAG), or LLM-enabled tools and agents.
Much like the disparate hardware landscape of the original iPhone era, these AI models and tools often operate in silos, not optimally integrated for a seamless user experience. The average user faces the daunting task of figuring out which AI to use for their needs and how to access it. Without a unifying platform or framework, the full potential of artificial intelligence remains out of reach for many, creating a significant gap between the technology's capabilities and its accessibility to the broader public.
Introducing JENOVA, the first AI operating system, which seamlessly integrates the most advanced LLMs, retrieval augmented generation (RAG), and tools/agents into a unified, user-friendly platform. By bridging the gap between frontier AI capabilities and everyday users, JENOVA aims to transform how we interact with artificial intelligence.
JENOVA comprises three core components:
Model routing is the system that, based on the intent and domain of the user's query, dynamically selects the LLM best able to provide an accurate and relevant response. JENOVA's model selection mapping is continuously refined based on authoritative LLM performance benchmarks as well as rigorous internal evaluations, ensuring users are always accessing best-in-class LLM intelligence.
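As a rough illustration, a routing layer can be as small as a classification step over a model-selection table. The sketch below is a minimal example under stated assumptions; the domain labels, model IDs, and classifier prompt are all illustrative and are not JENOVA's actual routing logic:

```python
# Minimal sketch of intent-based model routing. The routing table and
# classifier prompt are illustrative, not JENOVA's actual mapping.
ROUTING_TABLE = {
    "math_reasoning": "o1-preview",
    "general_knowledge": "claude-3-5-sonnet-20240620",
    "creative_writing": "llama-3.1-405b-instruct",
}

def classify_intent(query: str, classifier_llm) -> str:
    """Ask a lightweight LLM to label the query with one known domain."""
    domains = ", ".join(ROUTING_TABLE)
    prompt = (
        f"Classify the user query into exactly one of: {domains}.\n"
        f"Query: {query}\nDomain:"
    )
    return classifier_llm(prompt).strip()

def route(query: str, classifier_llm) -> str:
    """Return the model ID best matched to the query's domain."""
    domain = classify_intent(query, classifier_llm)
    return ROUTING_TABLE.get(domain, "claude-3-5-sonnet-20240620")  # safe fallback
```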
Retrieval Augmented Generation (RAG) extends the knowledge of an LLM by utilizing vector databases and search technologies to efficiently retrieve relevant information from custom knowledge bases beyond the LLM's initial training data. This enables JENOVA to operate with an effectively unlimited context and to extract pertinent primary source information from vast, easily modifiable datasets.
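To make the retrieval mechanics concrete, here is a minimal sketch that assumes some text-embedding function (`embed`, returning a NumPy vector) and ranks chunks in memory by cosine similarity; a production system would delegate this ranking to a dedicated vector database:

```python
# Minimal RAG retrieval sketch: embed the query, rank stored chunks by
# cosine similarity, and splice the top-k into the prompt as context.
# `embed` is assumed to be any function mapping text to a NumPy vector.
import numpy as np

def top_k_chunks(query: str, chunks: list[str], embed, k: int = 5) -> list[str]:
    q = embed(query)                          # query vector
    C = np.stack([embed(c) for c in chunks])  # one row per chunk
    sims = C @ q / (np.linalg.norm(C, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

def augmented_prompt(query: str, chunks: list[str], embed) -> str:
    context = "\n---\n".join(top_k_chunks(query, chunks, embed))
    return f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
```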
Tools and agents represent the various capabilities beyond the basic input/output function of an LLM, such as web search or interactions with external systems. JENOVA intelligently determines when and how to deploy these tools, either in response to explicit user requests or proactively when it recognizes an opportunity to enhance the comprehensiveness of its answers or to carry out a user-intended action.
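For concreteness, a tool such as web search might be declared with a JSON-Schema-style signature like the one below; this mirrors the declaration format used by major LLM providers, though the specific definition is illustrative rather than JENOVA's internal format:

```python
# Illustrative provider-neutral tool declaration in the JSON-Schema style
# commonly used for LLM function calling. The implementation is a stub.
WEB_SEARCH_TOOL = {
    "name": "web_search",
    "description": "Search the web for up-to-date information on a topic.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "The search query."},
        },
        "required": ["query"],
    },
}

# A registry mapping tool names to implementations lets the system dispatch
# whichever tool a model decides to invoke.
TOOL_REGISTRY = {"web_search": lambda query: f"<results for {query!r}>"}
```

Declaring a tool this way is the easy half; the harder problem, as the example further below shows, is that the format in which models return their tool invocations differs across providers.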
On the surface, model routing, RAG, and tools/agents seem like fairly straightforward functions, common across the vast AI startup landscape, that hardly merit the designation of an “operating system”. However, this is only true when analyzing these capabilities in isolation, not when all of the components must be mutually interoperable and scalable.
Let's consider a simple example involving only the model router and tools. The most common method by which models invoke tools is function calling. However, each AI provider implements function calling slightly differently, which makes it challenging to apply the same tool reliably when switching between different providers' models. For instance, OpenAI separates tool calls into distinct parameters with arguments formatted as JSON strings, while Anthropic returns tool calls embedded within a larger content block. These differences in structure and formatting mean that a tool optimized for one provider's function-calling mechanism might not work seamlessly with another. Consequently, swapping between models from different providers requires adapting tools or modifying function-calling protocols, which not only complicates the process and introduces potential inconsistencies but also hinders the integration and scaling of additional models and tools.
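To illustrate the mismatch, the sketch below normalizes both providers' responses into a single internal type. It reflects the response shapes of the public OpenAI and Anthropic Python SDKs as of late 2024 and is a minimal sketch of one possible adapter, not JENOVA's implementation:

```python
# Sketch: the same tool call arrives in different shapes per provider.
# Normalizing both into one internal type keeps tools provider-agnostic.
import json
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    arguments: dict

def from_openai(message) -> list[ToolCall]:
    # OpenAI: tool calls live in message.tool_calls, each with a function
    # name and arguments serialized as a JSON *string*.
    return [
        ToolCall(tc.function.name, json.loads(tc.function.arguments))
        for tc in (message.tool_calls or [])
    ]

def from_anthropic(response) -> list[ToolCall]:
    # Anthropic: tool calls are `tool_use` blocks embedded in the larger
    # response.content list, with `input` already parsed into a dict.
    return [
        ToolCall(block.name, block.input)
        for block in response.content
        if block.type == "tool_use"
    ]
```

Downstream code then consumes `ToolCall` objects without knowing, or caring, which provider produced them.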
JENOVA addresses this particular challenge by standardizing the tool invocation process across all integrated models, ensuring that tools are reliably and consistently applied, regardless of the underlying model provider or framework. Let's take an example of a simple query process:
A user asks: "I hold Nvidia stocks, how's the company performing?"
JENOVA analyzes the query and determines that up-to-date external information is required. It activates the web search agent to gather recent financial data on Nvidia.
The web search agent performs a Google search for Nvidia's latest financial results, then iteratively evaluates the search results and collects data from various sources such as Nvidia's latest financial reports, recent press releases, and analyst reports.
For high-volume data sources, such as the annual financial report (10-K), which can run to 80,000+ tokens per file, JENOVA utilizes RAG to retrieve the information most relevant to analyzing Nvidia's performance while minimizing context window usage.
JENOVA then selects the LLM best suited to financial analysis to answer the user's query, using the context provided by the web search agent and RAG.
Throughout this process, different LLMs handle different tasks: one LLM determines the user's intent and invokes the web browsing agent, another orchestrates the Google search and data collection, and a third produces the final answer. Each LLM within this workflow can be swapped in or out with no integration effort, enabling JENOVA to quickly and scalably adopt new LLMs, regardless of the provider or function-calling framework. And this represents just one dimension of JENOVA's system capabilities among many.
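Put together, the workflow resembles the sketch below, in which every stage is a callable behind a common interface, so any stage's model can be replaced without touching the others. The stage names and wiring are illustrative, not JENOVA's internal architecture:

```python
# Sketch of the multi-LLM workflow: intent detection, search, retrieval,
# and answering are independent stages, each backed by a swappable model.
def answer(query: str, intent_llm, search_agent, retriever, answer_llm) -> str:
    needs_web = intent_llm(
        f"Does this query need fresh web data? Answer yes or no: {query}"
    ).strip().lower().startswith("yes")
    if needs_web:
        documents = search_agent(query)        # iterative search + collection
        context = retriever(query, documents)  # RAG over high-volume sources
    else:
        context = ""
    return answer_llm(f"Context:\n{context}\n\nQuestion: {query}")
```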
This interoperability is what elevates JENOVA from a simple aggregation of LLMs, RAG, and tools/agents into a true operating system, seamlessly managing complex multi-system AI interactions. To tie back to the iPhone/iOS analogy, the LLMs function as the processors, executing tasks and driving operations; RAG acts as the memory, storing and retrieving data efficiently; and the tools/agents serve as peripherals, such as cameras, audio, and connectivity components, with JENOVA orchestrating these elements to optimize performance and efficiency.
JENOVA is designed so that it can, in principle, integrate and scale unlimited models, knowledge bases, and agents without compromising speed or accuracy, with computational cost being the only variable factor. This extensibility enables rapid integration and scaling of native and future third-party capabilities at a pace unmatched by other platforms. All of this technological complexity, however, is hidden behind an elegantly minimalistic interface with which users interact using natural language and no more than ten unique icons.
JENOVA is now used every day by people from over 70 countries worldwide and is built on principles that echo the revolutionary impact of the original iOS:
A singular focus on user experience, building an intuitive interface that even the least tech-savvy could navigate. Lowering the barrier to entry for everyday people who might otherwise be intimidated by the complexity of AI.
A seamless integration of the best AI models and tools, bringing out the best in each component. Enabling users to access cutting-edge AI capabilities all within a single interface.
A platform with consistent frameworks for tools and APIs, which future third-party developers can leverage to easily build and deploy apps on top of the most advanced AI technologies.
If you share this vision, then come build with us — contact@jenova.ai.