
Engineering

From (Low-)Code to Production

FEBRUARY 6, 2023

Typically, software is developed on your localhost before you eventually find a way to bring it to production. For some, this takes just seconds; for others, it might take days or even months.

In this article, we discuss the different deployment models you can use, both for your code and for more abstract artifacts such as case, process, and rule models.

Building Software

Let's assume you have been building a software solution in a language that needs to be compiled. In this case, you develop locally, potentially with Test-Driven Development (TDD), compile the code, and automatically execute the tests. Once the code is considered ready, you can proceed to the next stage.
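As a small illustration of that inner loop, here is what a TDD-style test could look like with plain JUnit 5; the class and method names (PriceCalculator, totalFor) are invented for illustration only.

```java
// A minimal sketch of the TDD inner loop with JUnit 5: the test describes the
// expected behavior, and the production code (here in the same file for brevity)
// is written to make it pass. PriceCalculator and totalFor are hypothetical names.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    @Test
    void addsLineItemPrices() {
        // The build runs this automatically after every compile,
        // so a regression is caught before the code leaves your machine.
        assertEquals(30.0, new PriceCalculator().totalFor(new double[] {10.0, 20.0}));
    }
}

class PriceCalculator {

    double totalFor(double[] prices) {
        double total = 0;
        for (double price : prices) {
            total += price;
        }
        return total;
    }
}
```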

But before that, a simple question: did you write the code alone or through collaborative/pair programming? If you wrote it alone, you might want to get it reviewed first.

To do so, you commit your code and push it to a version control system, for example a Git-based platform like GitHub, GitLab, or Bitbucket. These platforms also support code reviews and can trigger a build you have configured. They might even be able to deploy to a certain environment.

Deploying Software

To deploy the software, you might just have a button in your build pipeline that pushes it out to the server you would like to use. However, sometimes that's not possible due to organizational restrictions, and manual steps might be required. The key is to avoid manual steps as much as possible so the process stays fast and consistent. As soon as manual work is involved, you can become a single point of failure, and you tend to deploy less often, which makes each deployment riskier because more has changed since the last one.

The software might be deployed to a modern environment like Kubernetes, where rollouts to different environments can be driven by Helm charts. If your infrastructure isn't ready for that yet, you can also build an application artifact (e.g., a WAR file) and deploy it to your application server (e.g., Tomcat, JBoss). These steps, too, can be automated, but often they are manual.

Software frameworks like Spring Boot (used by Flowable) allow you to configure what kind of artifact is generated. That gives you the choice of deployment approach: you only need to configure the build process accordingly and set up your automated build pipeline to deliver the artifact to the world.
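As a rough sketch of what that looks like in code (the class name is hypothetical): with Spring Boot, the same application class can back either an executable JAR or a WAR for an external servlet container, while the packaging itself is selected in the Maven or Gradle build.

```java
// A minimal sketch, assuming a Spring Boot based application class
// (the name MyFlowableApplication is hypothetical). The same class can back an
// executable JAR or a WAR for an external servlet container; the packaging itself
// (jar vs. war) is chosen in the Maven or Gradle build configuration.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;

@SpringBootApplication
public class MyFlowableApplication extends SpringBootServletInitializer {

    public static void main(String[] args) {
        // Used when running the executable artifact: java -jar my-flowable-app.jar
        SpringApplication.run(MyFlowableApplication.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        // Used when the artifact is built as a WAR and deployed to Tomcat, JBoss, etc.
        return builder.sources(MyFlowableApplication.class);
    }
}
```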

Deploying Flowable

There are different ways to deploy Flowable, as it provides lots of flexibility in this area. Which one to choose depends on your requirements.

First, you need to decide whether customizations to Flowable are required. Don't worry: even though it's an important decision, you can still change it later. If you decide against customizations, you can simply use the artifacts provided by Flowable as-is. These include WAR files, which can also be run as JAR files, as well as Docker images. If you decide to make code customizations to Flowable, you need to build your own artifact.

This is additional code that comes on top of the Flowable artifacts and can be included in a single WAR/JAR file, an extension JAR file, or a Docker image. To create a basic customization project, you can check out start.flowable.com, which will generate a project for you that you can simply build. Creating a Docker image from it isn't hard either: it can simply be based on a Java Docker image and start, for example, the JAR file.

The alternative is to create a JAR that doesn't contain the Flowable artifacts and add it to the classpath of the Flowable application. This can be done either when you are using an application server or by extending the Flowable Docker images. If you go this way, make sure you don't bundle the Flowable dependencies. As long as you stick to the public APIs, this makes upgrading to a newer version easier: you might not even need to rebuild your project, because upgrading Flowable is enough.
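A hedged example of what such custom code could look like: a service task delegate registered as a Spring bean and shipped as an extension JAR. The bean name and logic are invented; the point is that it only relies on Flowable's public JavaDelegate API and does not bundle the Flowable dependencies themselves.

```java
// A sketch of custom code on top of Flowable: a service task delegate exposed as a
// Spring bean and shipped as an extension JAR. The bean name and logic are invented
// for illustration; the Flowable dependencies it compiles against should be declared
// as provided/compileOnly so they are not bundled into the JAR.
import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.JavaDelegate;
import org.springframework.stereotype.Component;

@Component("orderEnrichmentDelegate")
public class OrderEnrichmentDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // Read a process variable, enrich it, and store the result as a new variable.
        String customerId = (String) execution.getVariable("customerId");
        execution.setVariable("customerSegment", lookupSegment(customerId));
    }

    private String lookupSegment(String customerId) {
        // Placeholder for a call into your own backend systems.
        return customerId == null ? "unknown" : "retail";
    }
}
```

A service task in a process model can then point to it with a delegate expression such as ${orderEnrichmentDelegate}, which keeps the model itself free of implementation details.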

Whichever approach you choose, you always want to run your tests to ensure that everything works as expected.
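For Flowable itself, a process-level unit test could look roughly like this, assuming an in-memory engine test configuration is available on the test classpath; the process key and resource path are assumptions for illustration.

```java
// A sketch of a process unit test with JUnit 5, assuming an in-memory engine test
// configuration (e.g. a flowable.cfg.xml) is on the test classpath. The process key
// "orderFulfillment" and the resource path are assumptions for illustration.
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.flowable.engine.RuntimeService;
import org.flowable.engine.runtime.ProcessInstance;
import org.flowable.engine.test.Deployment;
import org.flowable.engine.test.FlowableTest;
import org.junit.jupiter.api.Test;

@FlowableTest
class OrderFulfillmentProcessTest {

    @Test
    @Deployment(resources = "processes/orderFulfillment.bpmn20.xml")
    void processStartsSuccessfully(RuntimeService runtimeService) {
        // Deploys the model for this test only, starts an instance, and verifies it exists.
        ProcessInstance processInstance =
                runtimeService.startProcessInstanceByKey("orderFulfillment");
        assertNotNull(processInstance);
    }
}
```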

Developing Models

We now have a way to bring the code we developed to production, but we still haven't created a business process or case yet. This is the next step, and again you need to decide how you work together. Rather than something text-based, we now have something visual: case, process, and rule models. The people working on them might be software engineers, but they don't need to be. These considerations shape how we create and deliver those models. The first question to ask yourself is: how many people are working on the same business application at the same time?

Fewer people means it will be easier. Changing graphical models in a team is always challenging: even if the software allowed concurrent editing, with BPMN and CMMN you are typically working on a diagram that represents executable software, and having another person change the model at the same time might cause unexpected failures during test execution. In software development, we typically use branches to handle concurrent changes. This doesn't work as well for model-based development, since graphical tools offer no real merge functionality. You can obviously use the text-based representation to do the merge, but that can be challenging for less technical people.

It's all about coordination and communication. Flowable helps with this by providing model locking: only one person at a time can change a model, which is an efficient way to work together, and you also know you are the only person working on the model while testing it. By splitting your business use case into multiple models, you not only gain an abstraction layer, you can also work on different parts of the application at the same time.

Using model locking requires a centralized Flowable Design instance. This instance can be used by everyone, and you can then deploy from it to different environments. But be careful: when making code and model changes at the same time, you might need to test them first with a local Flowable Design and some unit tests, because changing the model on the shared instance might break it for everybody else. Code first, model next. This is also something to consider when coding: make everything as abstract as possible, but no more abstract than necessary, and ensure that your code components are reusable from your models.
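As a sketch of such a reusable code component: a small Spring service bean with a generic method that any process or case model can call through an expression (the bean and method names here are invented for illustration).

```java
// A sketch of a reusable code component: a small Spring service bean that several
// process or case models can call through an expression. The bean name, method, and
// pricing logic are invented for illustration.
import org.springframework.stereotype.Service;

@Service("pricingService")
public class PricingService {

    public double quoteFor(String productId) {
        // Keep the logic generic and model-agnostic so different models can reuse it.
        return "premium".equals(productId) ? 199.0 : 49.0;
    }
}
```

A model can then call it with an expression like ${pricingService.quoteFor(productId)} without knowing anything about the implementation.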

Deploying the Models to Flowable

This part is easy: just press publish and you are done.

Well… in theory that's correct, but in reality it's only half the truth. Typically, there's more than one environment. While you are working in a development environment, it's totally fine to do this. But eventually you will move through a Quality Assurance (QA) or User Acceptance Testing (UAT) environment to your production environment, and you don't want to press the publish button to deploy to production: not only because it's hard to control who does it, but also because you want to know what was published to which environment.

To get your artifacts toward production, you again have a choice, and it depends mainly on who is building the diagrams. In technical projects, you can add those artifacts to your source code base and check them into version control. When you use a local Flowable Design application, you might want to automate this with artifact extraction; when using a centrally deployed Flowable Design instance, you could, for example, use the REST APIs to pull the artifacts from Flowable Design. Either way, your models end up residing with the code base. The main advantage is that you can easily test everything together and quickly validate whether the current code works with the current models. Once that is done, you can also deploy everything together, since artifacts can be deployed from the source code.
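As an illustration of deploying models that live next to the code, here is a sketch using Flowable's public RepositoryService API; the resource names are assumptions, and with the Flowable Spring Boot starters a similar effect can also be achieved by placing the files in the default auto-deployment folders.

```java
// A sketch of deploying models that are checked into the code base, using Flowable's
// public RepositoryService API. The resource names are assumptions; this explicit
// variant is mainly useful when you want full control over when and how the bundled
// models are deployed.
import org.flowable.engine.RepositoryService;
import org.flowable.engine.repository.Deployment;

public class ModelDeployer {

    private final RepositoryService repositoryService;

    public ModelDeployer(RepositoryService repositoryService) {
        this.repositoryService = repositoryService;
    }

    public Deployment deployBundledModels() {
        // The BPMN files live next to the code (e.g. src/main/resources/processes)
        // and are therefore versioned, reviewed, and tested together with it.
        return repositoryService.createDeployment()
                .name("order-models")
                .addClasspathResource("processes/orderFulfillment.bpmn20.xml")
                .addClasspathResource("processes/orderCancellation.bpmn20.xml")
                .deploy();
    }
}
```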

Keeping the models with the code base is both an advantage and a disadvantage. By bundling your code with the models, you can't deploy them independently: whenever you deploy the code, you also switch to the latest model version. The old model version will still be there, since Flowable supports running two versions of the same model simultaneously, so your code still needs to consider both. However, you might also want to deploy a new version of a model without making any changes to the code, which is not possible with this approach.

In addition to file-based deployment, it is also possible to deploy models via the REST API. This gives you the flexibility to deploy them to production whenever you want: you don't need to deploy your source code, and you don't depend on the infrastructure. However, with this approach you need to consider how you track what was deployed to which environment and when. There are solutions for this; one way is to create a separate Continuous Integration/Continuous Delivery (CI/CD) pipeline, in which you can even automate model-based testing.
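A hedged sketch of such a pipeline step in plain Java: it uploads a model file to a Flowable REST deployment endpoint. The base URL, credentials, and artifact path are assumptions, and depending on your setup (Flowable REST application, gateway, Flowable Design) the exact endpoint and authentication will differ.

```java
// A sketch of deploying a model artifact over REST from a CI/CD pipeline step,
// independently of the application's own release cycle. The base URL, credentials,
// and artifact path are assumptions for illustration.
import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class RestModelDeployer {

    public static void main(String[] args) throws Exception {
        Path artifact = Path.of("build/models/orderFulfillment.bpmn20.xml");
        String boundary = "----flowable-" + System.currentTimeMillis();

        // Assemble a multipart/form-data body containing the model file.
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        body.write(("--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\""
                + artifact.getFileName() + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n")
                .getBytes(StandardCharsets.UTF_8));
        body.write(Files.readAllBytes(artifact));
        body.write(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://flowable.example.com/process-api/repository/deployments"))
                .header("Authorization", "Basic " + Base64.getEncoder()
                        .encodeToString("deployer:secret".getBytes(StandardCharsets.UTF_8)))
                .header("Content-Type", "multipart/form-data; boundary=" + boundary)
                .POST(HttpRequest.BodyPublishers.ofByteArray(body.toByteArray()))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Log the result so the pipeline records what was deployed, where, and when.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```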

Environment Independence

Above, we discussed several approaches you can choose from depending on your needs. We have customers using each of them to deploy from their development environment to production.

However, you also need to ensure that the models work in other environments. This is often done with environment configuration that specifies the different properties for each environment. There are two ways to do this: either you overwrite your models with configuration properties specific to each environment, or you use generic placeholders and expressions inside your models.
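A sketch of the placeholder/expression approach: the model references a bean property such as ${environmentConfig.crmBaseUrl}, while the actual value comes from per-environment configuration (for example Spring profiles). The property and bean names are invented for illustration.

```java
// A sketch of keeping models environment-independent: the model references a bean
// property (e.g. ${environmentConfig.crmBaseUrl}), while the actual value comes from
// per-environment configuration files. The property and bean names are assumptions.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component("environmentConfig")
public class EnvironmentConfig {

    // e.g. crm.base-url=https://crm-test.example.com in the dev/QA profile
    //      crm.base-url=https://crm.example.com      in the production profile
    @Value("${crm.base-url}")
    private String crmBaseUrl;

    public String getCrmBaseUrl() {
        return crmBaseUrl;
    }
}
```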

Keep in mind that there is not always a clean separation between what is a model and what is code. An event registry channel model is considered a model, but you might also want to treat it as infrastructure, especially if it reads emails and simply exposes them through the default event model to all your processes and cases. The separation is sometimes murky, and as so often, it depends on the use case.

Conclusion

This blog post presents a framework for defining your own path from development to production. None of the approaches above is "the one solution", and none of them is wrong; which one is best often depends on the use case.

There are some key factors to consider when implementing it, which apply to all real software projects: Who has what responsibility? How do you test the software? How much can you automate?

With Flowable's flexible architecture and the various ways to deploy to production, it is a matter of choosing, or adapting to, the right path given all the functional and technical constraints.

Valentin Zickner

Senior Solution Architect

Valentin is a Solution Architect at Flowable. Besides consulting customers on the best implementation of Flowable, he is currently focused on enhancing the developer experience through documentation improvements and video tutorials. 
