The Microsoft Visual Studio Setup Project is an old Microsoft technology for creating installers. It has been out of support for nearly a decade and is no longer present in Visual Studio, but when I visit customer sites I find legacy technologies and I have to deal with them in the short term.
A couple of days ago I was working on an automated CI build on Azure DevOps and we hit an issue when trying to compile an old VDPROJ (migration to WiX in progress, btw ☺): an HRESULT 8000000A error.
Your team works with a project in Azure DevOps. Your build time starts to increase as the project’s complexity grows but you want your CI build to deliver results as quickly as possible. How can you do that? With parallelism, of course!
The following example shows how to design a build with:
A first “initialization” job (Init).
The actual build jobs: Build 1 and Build 2, which we want to run in parallel after Init completes.
A final step that we want to execute only after both Build 1 and Build 2 have completed.
We start with configuring the build to look like the following picture:
To orchestrate the jobs as specified above we use the “Dependencies” feature. The first job has no dependencies, so we leave the field blank.
For the Build 1 job we set the value to Init. This way we instruct Azure DevOps to start Build 1 only after Init has completed.
We do the same thing with the Build 2 job.
For the final step we set Build 1 and Build 2 as dependencies, so this job will wait for the two previous builds to complete before starting.
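If you prefer to define the pipeline in YAML rather than in the classic designer, the same orchestration can be sketched with the `dependsOn` property. This is a minimal sketch: the job names match the ones above, but the script steps are just placeholders, not the real build commands.

```yaml
jobs:
- job: Init
  steps:
  - script: echo "Initialization work here"

# Build1 and Build2 both depend only on Init,
# so Azure DevOps runs them in parallel once Init completes.
- job: Build1
  dependsOn: Init
  steps:
  - script: echo "Building part 1"

- job: Build2
  dependsOn: Init
  steps:
  - script: echo "Building part 2"

# The final job waits for BOTH builds to finish.
- job: Final
  dependsOn:
  - Build1
  - Build2
  steps:
  - script: echo "Final step"
```

By default, jobs listed in `dependsOn` must all succeed before the dependent job starts, which is exactly the fan-out/fan-in shape we designed.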
Here we can see the build pipeline while it’s executing.
With this brief tutorial we learned how to design a build pipeline with dependencies and parallelism, which can reduce the turnaround time of our CI process. A fast and reliable CI process is always a good practice, because we must strive to gather feedback from our processes and tools as quickly as possible. This way we can resolve issues in the early stages of our ALM, keeping costs down and avoiding problems with customers.
One of the common pitfalls of CI is that the build status is not monitored and not treated as one of the team’s top priorities.
A healthy/green status of our CI process means that our code is in good shape, as far as our automated tests can tell. Fixing the build status ASAP is easier than leaving it red and fixing it later, because the recent changes to the codebase are still vivid in the team members’ memory.
In this blog post we’re going to configure a build process in VSTS to enable continuous integration for our ASP.NET Core example web-app. Continuous integration is a powerful technique to prevent merge-hell and improve quality in the “left” stages of our software production process. In the fast-paced world of development we want to merge newly developed features into the main line of development as soon as possible, to avoid long-lived branches that will cause painful merges. If we keep our units of work small and focused we’ll reap great benefits.
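As a sketch, a YAML definition for such a CI build could look like the following. The branch name, agent image, and steps are assumptions for illustration, not the exact configuration from this post:

```yaml
# CI trigger: every push to the main line starts a build.
trigger:
  branches:
    include:
    - master   # hypothetical main-line branch name

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: dotnet restore
  displayName: Restore NuGet packages
- script: dotnet build --configuration Release --no-restore
  displayName: Build
- script: dotnet test --configuration Release --no-build
  displayName: Run unit tests
```

Running the unit tests on every push is what turns a plain build into continuous integration: a red result tells the team immediately that the last merge broke something.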
In application development, we build a working application from its constituent parts, compiling it in the correct order then linking and packaging it. The ‘build process’ will do everything required to create or update the workspace for the application to run in, and manage all baselining and reporting about the build.
Similarly, the purpose of a database build is to prove that what we have in the version control system – the canonical source – can successfully build a database from scratch. The build process aims to create a working database, at a particular version, from the DDL creation scripts and other components held in the version control repository. It will create a new database, create its objects, and load any static, reference or lookup data.
Since we’re creating an entirely new database, a database build will not attempt to retain the existing database or its data. In fact, many build processes will fail, by design, if a database of the same name already exists in the target environment, rather than risk an IF EXISTS ... DROP DATABASE command running on the wrong server!
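A guard at the top of the build script is one way to get that fail-fast behavior. This is a T-SQL sketch; the database name is illustrative:

```sql
-- Fail the build if the target database already exists,
-- instead of risking a DROP on the wrong server.
IF DB_ID(N'MyAppDb') IS NOT NULL
BEGIN
    RAISERROR(N'Database MyAppDb already exists; aborting build.', 16, 1);
    RETURN;
END

-- Safe to proceed: the name is free on this instance.
CREATE DATABASE MyAppDb;
```

Raising the error at severity 16 makes tools like sqlcmd report a failure, which in turn fails the CI job instead of silently clobbering an existing database.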
But what do we actually do with a database build? If I want to give a development database to someone on my team, isn’t a backup and a restore on their machine enough? Sure, but that is not the only purpose of building a database from scratch, repeatably, at the push of a button.
What benefits do we gain?
The immediate benefits of a database that builds correctly are:
A health check of the database schema: all objects are valid and don’t reference anything that doesn’t exist;
The process can (must?) be automated. That way, after every schema change committed to a VCS, the database build starts and verifies whether we have broken anything;
By automating the process, we can automate tests against the database;
By automating the tests, we start to have an automated quality-control system for the database.
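As a sketch, the automated build-and-test stage could be a pipeline job like the one below. The server variable, script path, and database name are assumptions, and tSQLt is just one option for SQL Server unit testing:

```yaml
- job: DatabaseBuild
  steps:
  # Build the database from scratch using the DDL scripts in version control.
  - script: sqlcmd -S $(DbServer) -i build/create_database.sql
    displayName: Build database from DDL scripts
  # Run the automated database tests against the freshly built database.
  - script: sqlcmd -S $(DbServer) -d MyAppDb -Q "EXEC tSQLt.RunAll"
    displayName: Run database unit tests
```

If either step fails, the CI build goes red, giving us the schema health check and the automated quality gate described above.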
By combining a buildable database with a couple of other tools, we can achieve what has existed for applications for decades: a structured, automated Database Lifecycle Management.
Have you implemented DLM for your database? Mine is a work in progress, and I’ll write some posts to explain how I achieved it.