We support our enterprise clients during their digital transformation and help them work out their digital strategy. They can build on our expertise and experience with the best-practice solutions available. We have implemented various types of projects, including container platform implementation, technical support, and competency development.
We support our startup clients as they scale, letting them focus on fast, efficient feature development and market acquisition. They can build on our expertise and experience with the best-practice solutions available. We are involved in various types of activities, including container platform implementation, technical support, and competency development.
We support our integrator clients in meeting the needs of their clientele and help them implement their complex projects. They can build on our expertise and experience with the best-practice solutions available. We have performed various types of tasks as a subcontractor, including container platform implementation, technical support, and competency development.
The Origoss Solutions team has helped us so many times already that I stopped counting. The work ethic of this company is exactly what we were looking for: hard-working, passionate people who deliver solutions. It can't get much better than that. Even for us, who don't usually fancy outsourcing, this was the absolute right choice.
We are developing a public-area operations support system that performs a huge number of GIS operations and receives plenty of photos from mobile applications. The Origoss Solutions team has supported us tremendously from the beginning. They have saved us a lot of time, so we could focus on our core business and sales.
In our industry it is vital to stay on top of the latest technological trends and to constantly develop our competencies. Over the years Origoss Solutions has become a trusted partner to deliver training services. They are a truly dedicated team with undisputed skills in the container management domain. We like working with them.
Head of PMO, Digital Automation at Nokia
Building the largest OpenShift platform in the CEE region
Our task was to develop operations support functions for the platform, such as logging, monitoring, and backup. Our client already operated a number of legacy systems and had a complex operating organization, so adapting to the existing operating environment was important. The implemented system was meant to provide a platform supporting the development of applications for public institutions, which is why security was vital.
Activities performed on the platform must be auditable, with records kept for an extended period, and the platform must also provide high availability, so we had to meet a number of special requirements when designing the logging, monitoring, and backup functionality. We also implemented operations support functionality for the applications running on the platform.
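To give a flavor of what such an audit-retention requirement looks like in practice, here is a minimal sketch using Python's standard library. The retention period, file path, and record format are hypothetical, not the client's actual configuration:

```python
import logging
import logging.handlers

# Illustrative sketch only: the retention period and file path below are
# hypothetical, not the client's actual audit configuration.
AUDIT_RETENTION_DAYS = 365  # audit records must be kept for an extended period


def make_audit_logger(path="audit.log"):
    """Build a logger that rotates its file daily and keeps a fixed
    number of daily backups, approximating a retention policy."""
    handler = logging.handlers.TimedRotatingFileHandler(
        path, when="midnight", backupCount=AUDIT_RETENTION_DAYS
    )
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(message)s"))
    logger = logging.getLogger("audit")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger
```

In a real deployment the equivalent policy would live in the logging stack itself (e.g. index lifecycle or retention settings), but the principle is the same: rotation plus a bounded backup count enforces the retention window.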
Migrating a complex Tectonic environment to AKS
We helped the customer to migrate their on-premises cloud platform solution to the public cloud. The system to migrate was based on an outdated version of Tectonic that hosted development, test, and production environments. The old platform was manually deployed and managed; as such, it required a lot of engineering resources. The target environment was Azure Kubernetes Service (AKS).
After assessing the customer's infrastructure and identifying the requirements, we proposed a target architecture. We designed the deployment and management workflows based on current GitOps and infrastructure as code (IaC) principles. We delivered a working implementation that served as the groundwork for later continuous integration/continuous deployment (CI/CD) integration.
Migrating a complex managed OKD environment to EKS
We were originally asked to help develop a staging environment on top of a managed OpenShift Kubernetes Distribution (OKD) platform that also hosted the existing production environment. During our project, the customer decided to change the target platform to Amazon Elastic Kubernetes Service (EKS). While working with the customer, we identified several operational issues impacting the production environment and proposed changes that were later implemented on Amazon EKS.
We have been continuously advising our client on operational aspects of both the staging and production environments. We implement the proposed solutions and hand them over to the customer's DevOps team. We have also supported the customer during and after operational incidents, helping with root-cause analysis. The solutions we design and implement use open-source tools and follow the GitOps and infrastructure as code (IaC) principles.
Migrating a complex VM environment to GKE
In this case, we helped the customer migrate their virtual machine-based environment to the public cloud and build a reliable, convenient development environment with a modern tool set. They had already been using Google Cloud, so it was natural to stay on this platform. We also planned a complete logging and monitoring solution (Loki, Prometheus, Grafana) to help the client discover potential bottlenecks and problems.
After containerizing the applications, we built a testing environment and implemented the continuous integration/continuous deployment (CI/CD) integration. We also created a development environment based on Skaffold and Kustomize, and helped with thorough testing of the entire system. We identified some flaws and completely rebuilt one of the components from scratch to make it more future-proof and cloud compatible. After drawing up a step-by-step migration plan together with the customer, we successfully moved their entire domain to its new home on GKE. We have been providing support ever since, continuously improving and fine-tuning the system based on their requests.
Supporting a custom Kubernetes environment
The tasks in this case consisted of integrating multiple operations support solutions into an existing, frequently changing on-premises development Kubernetes platform. The overall goal was to make the platform stable and to make its processes and resources transparent for the client. Accordingly, the operations support services mainly involved logging, monitoring, and alerting, helping the client better understand what is happening to their applications inside the cluster and identify failing components and bottlenecks.
We integrated the most popular monitoring and logging solutions available (Prometheus, Grafana, the ELK stack), and we also found and fixed faulty configurations in some of the existing components. One of the major tasks was implementing an API gateway from scratch to enable communication between different alerting systems. We also helped fix problems that occurred when configuring the persistent storage behind the clusters (MinIO). We have been providing support for these systems since the beginning of the project.
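The core of such a gateway is a small translation layer between alerting formats. The sketch below, using only Python's standard library, maps an Alertmanager-style webhook payload into a generic downstream event format; the field names of the downstream system are hypothetical, and the client's actual gateway is not shown here:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative sketch only: the downstream event fields ("title", "severity",
# "status", "details") are hypothetical, not a specific product's API.


def translate_alert(alertmanager_payload: dict) -> list:
    """Map Alertmanager-style webhook alerts to a generic downstream format."""
    events = []
    for alert in alertmanager_payload.get("alerts", []):
        labels = alert.get("labels", {})
        events.append({
            "title": labels.get("alertname", "unknown"),
            "severity": labels.get("severity", "info"),
            "status": alert.get("status", "firing"),
            "details": alert.get("annotations", {}).get("description", ""),
        })
    return events


class GatewayHandler(BaseHTTPRequestHandler):
    """Minimal HTTP endpoint that accepts a webhook POST and translates it."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        events = translate_alert(json.loads(body))
        # A real gateway would forward `events` to the target system here.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps(events).encode())


# To run the gateway (port is illustrative):
# HTTPServer(("", 9093), GatewayHandler).serve_forever()
```

Keeping the translation in a pure function makes it easy to unit-test independently of the HTTP plumbing.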
Deploying a customized IBM FileNet Content Manager to AKS
In this assignment, we helped our client from Southeast Asia to build a containerized platform in AKS for one of their customers. The solution is based on IBM’s FileNet Content Manager Operator, and employs infrastructure as code (IaC) principles.
Our tasks included designing the Azure components to build on, testing and implementing the deployment process, providing workarounds for the inefficiencies of the cloud environment, and assisting our customer in getting support from the software vendor.
Developing our own training curriculum for the Prometheus system
Drawing on our experience implementing different kinds of monitoring systems, we developed our own training material for cloud native monitoring technologies, based primarily on the Prometheus metrics collection ecosystem.
We provide the material both as classroom-based training for parties interested in this technology in general, and as a workshop-based consultation focusing on our customer’s particular systems and problems.
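As a flavor of what the training covers, here is a deliberately simplified illustration of how Prometheus derives a per-second rate from counter samples. The real rate() function also compensates for counter resets mid-window and extrapolates to the edges of the range; this sketch only shows the core idea:

```python
# Simplified illustration of how Prometheus computes a per-second rate from
# counter samples. The real rate() also extrapolates to the window edges and
# handles resets occurring between intermediate samples.


def simple_rate(samples: list) -> float:
    """samples: (unix_timestamp, counter_value) pairs, in time order."""
    if len(samples) < 2:
        return 0.0
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    increase = v1 - v0
    if increase < 0:
        # Counter reset detected: the counter restarted from zero, so the
        # last observed value is the minimum possible increase.
        increase = v1
    return increase / (t1 - t0)
```

Working through small examples like this, before moving on to PromQL itself, is how the hands-on sessions build intuition for what the query language actually computes.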
Training hundreds of students on Kubernetes
During the last two years, we have trained hundreds of students from several big enterprises using our professional courses focusing on Kubernetes and related technologies. These training sessions are mostly hands-on, because we know that the best way to learn is through practice.
Our professional approach is to provide courses on the technologies we use ourselves. All of our experts hold the relevant certifications and have teaching experience, and we are eager to spread this knowledge through our training courses. We keep our teaching material up to date with technology upgrades, so you will always find the most current training content in our portfolio.
Customizing the AKS native logging service
When the customer approached us, the application had already been deployed and operated on top of Azure Kubernetes Service (AKS). Its custom application-level logging mechanism did not follow cloud native best practices, and the standard AKS logging mechanism could not cope with it, so we were asked to rectify the situation.
After experimenting with several approaches, we delivered one that met the requirements of the customer’s DevOps organization. We came up with a solution that works well together with the standard AKS logging mechanism, and as such it is easy to maintain.
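The cloud native convention the standard AKS pipeline expects is one structured log line per event, written to stdout, where the node-level collector can pick it up. The sketch below illustrates that convention only; it is not the customer's actual implementation, and the field names are hypothetical:

```python
import datetime
import json
import sys

# Illustrative sketch only: emits one JSON object per line to stdout, the
# convention node-level log collectors (such as the standard AKS logging
# agent) ingest directly. Field names here are hypothetical.


def log_event(level: str, message: str, **fields) -> str:
    """Write a single structured log line to stdout and return it."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "level": level,
        "message": message,
        **fields,
    }
    line = json.dumps(record)
    print(line, file=sys.stdout)
    return line
```

Once every event is a self-contained JSON line, the platform's standard collector can parse, enrich, and route it without any custom machinery, which is what makes such a solution easy to maintain.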
Optimizing the platform to manage costs
Our customer originally built an online transaction processing (OLTP) auction system for mobile ad serving, running on Amazon ECS in AWS. The application inherently generates a large amount of network traffic (many small transactions). After substantial business growth, they decided to optimize their cloud provider costs with a hybrid solution, moving the most network-intensive parts on-premises, because network costs differed hugely between providers. Since the application was originally architected around various AWS services, the move also meant changing it so that it could run in a Kubernetes environment.
Because of the contract terms, there was a strong financial incentive to finish the move on time, and because the customer had limited cloud engineering resources, we were involved in designing and implementing a hybrid solution that included a self-maintained on-premises Kubernetes cluster. The nature of the application required very strict response times, so part of the project was helping the customer select a container network interface (CNI) solution that could fulfill their requirements.
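When validating CNI candidates against strict response-time requirements, tail-latency percentiles matter far more than averages. A minimal sketch of computing a nearest-rank percentile from measured round-trip times (the sample data and any thresholds are hypothetical):

```python
import math

# Illustrative sketch: compute tail-latency percentiles from measured
# round-trip times in milliseconds. When comparing CNI candidates, the
# 99th percentile is usually the figure checked against the requirement.


def percentile(latencies_ms: list, p: float) -> float:
    """Nearest-rank percentile, with p in (0, 100]."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]
```

Running the same measurement harness against each candidate CNI, under realistic traffic, turns "could fulfill their requirements" into a concrete pass/fail number per candidate.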