How to get started with Prism in 3 easy steps

In this post we explore the basics of Prism to create a maintainable and scalable WPF application.


How to get started

  1. Download the Prism.Xyz NuGet packages for your platform: in this example, WPF (Prism.Wpf and Prism.Unity) with Visual Studio;
  2. Create a bootstrapper;
  3. Edit App.xaml and App.xaml.cs files;
  4. (Bonus point) Reorganize our project.

With this procedure we are setting the foundation for our Prism-enabled app.

1. Download

With the Manage NuGet Packages tool in Visual Studio, download the following packages: Prism.Wpf and Prism.Unity. Visual Studio will take care of the process, and at the end we’ll have these packages installed:


2. Create a Bootstrapper

In our project we add a new class. The name is not important, but it has to derive from UnityBootstrapper.

using Microsoft.Practices.Unity;
using Prism.Unity;
using System.Windows;

namespace IC6.Prism
{
    class Bootstrapper : UnityBootstrapper
    {
        protected override DependencyObject CreateShell()
        {
            return Container.Resolve&lt;MainWindow&gt;();
        }

        protected override void InitializeShell()
        {
            // Show the shell window resolved by CreateShell.
            Application.Current.MainWindow.Show();
        }
    }
}

3. Edit App.xaml and App.xaml.cs files.

In our project we edit App.xaml to remove the StartupUri attribute, since that part of the initialization is now handled by Prism with Unity.

<Application x:Class="IC6.Prism.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Application.Resources>
    </Application.Resources>
</Application>


In the code behind file (App.xaml.cs) we override the OnStartup method to implement our custom startup logic that leverages the Bootstrapper.

using System.Windows;

namespace IC6.Prism
{
    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            // Run the bootstrapper to configure the container and create the shell.
            var bootstrapper = new Bootstrapper();
            bootstrapper.Run();
        }
    }
}

4. Bonus point

Since we’re using Prism to get the most out of the MVVM pattern (and other features), we are also going to organize our project better. We delete MainWindow.xaml and its code-behind file from the solution, then we create a new folder called Views, and finally we create a new Window called MainWindow inside it. This is the result.


Back to school time

We started this post with just code and not much theory about what we are doing.

Why do we need a bootstrapper? A Prism application requires registration and configuration during the application startup process. This is known as bootstrapping the application. The Prism bootstrapping process includes creating and configuring a module catalog, creating a dependency injection container such as Unity, configuring the default region adapters for UI composition, creating and initializing the shell view, and initializing modules.

In a traditional Windows Presentation Foundation (WPF) application, a startup Uniform Resource Identifier (URI) is specified in the App.xaml file that launches the main window.
In an application created with the Prism Library, it is the bootstrapper’s responsibility to create the shell or the main window. This is because the shell relies on services, such as the Region Manager, that need to be registered before the shell can be displayed.
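To make the module-catalog step concrete, here is a minimal sketch of how the bootstrapper could register a module. MyModule is a hypothetical example name; ConfigureModuleCatalog is the UnityBootstrapper override where this step belongs.

```csharp
// Inside the Bootstrapper class: add a hypothetical module to the catalog
// so that Prism creates and initializes it during startup.
protected override void ConfigureModuleCatalog()
{
    base.ConfigureModuleCatalog();

    var catalog = (ModuleCatalog)ModuleCatalog;
    catalog.AddModule(typeof(MyModule));
}
```

Each registered module gets its Initialize logic executed at the end of the bootstrapping sequence described above.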

Dependency Injection

Applications built with the Prism Library rely on dependency injection provided by a container. The library provides assemblies that work with Unity, and it allows us to use other dependency injection containers. Part of the bootstrapping process is to configure this container and register types with it.
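As a minimal sketch of that registration step, assuming a hypothetical IMyService/MyService pair (ConfigureContainer is the UnityBootstrapper override for container configuration):

```csharp
// Inside the Bootstrapper class: map an interface to a concrete type so the
// container can inject it wherever a constructor asks for IMyService.
protected override void ConfigureContainer()
{
    base.ConfigureContainer();

    Container.RegisterType<IMyService, MyService>();
}
```

With this mapping in place, a view model declaring a constructor parameter of type IMyService automatically receives a MyService instance from Unity.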


In this post we introduced Prism and took our first baby steps. With this approach we’re setting up the architecture of our app to be scalable and maintainable. In the next posts we’ll go further and learn other Prism fundamentals.


Dependency Injection:

Prism GitHub homepage:

Keep it going

It has been nine months since this blog opened, and it’s time to look back. I started this adventure to record some interesting readings and thoughts, and to practice writing. I also started because I read all over my Twitter timeline and on other developers’ blogs that opening a blog and writing technical things down makes you a better developer. Being a better developer is what makes me tick, and so this place was born.


In the very first period I updated my social profiles (LinkedIn and Twitter) and set up the basics of the blog: the name, the graphical theme, and the pages where I talk about who I am.

The blog started in Italian because it was meant for my personal practice. I found it to be true: writing things in good form (not just tech notes or work e-mails for you or your team), in a way that is at least minimally pleasant to read, improves your knowledge of a subject. I have to research, read twice, provide scripts and images, and so on. Now I write in English because I want to practice and because English is the main language of the IT world. Anyway, I won’t stop writing some posts in Italian.

I have a schedule I can live with: 2-3 posts a week. This is the most difficult part. It was easier at the beginning, when everything was new; being consistent is the hardest thing.


No matter what, I have to keep up and go on, because I’m receiving positive feedback from this experience that I want to maintain and possibly amplify. My plan for the near future is to keep getting better at English technical writing about the subjects I like the most: Windows, C#, and possibly a bit of the Unity 3D engine.




Integrate Azure Cognitive Services in UWP app

Azure Cognitive Services are an amazing tool that enables developers to augment users’ experience using the power of machine-based intelligence. The API set is powerful and provides lots of features, organized in categories: vision, speech, language, knowledge, search, and labs.

In this post we learn how to leverage the Emotion API to detect our user’s mood and set the background of our app accordingly.

Get API key

To get started we need an API key to be able to access the Cognitive Services. So let’s go to the Cognitive Services trial page and click on the Create button next to Emotion API.


After that, the website will ask us to accept the trial terms of service, and we accept:


After the login we get access to our keys:


Now we’re done with the Azure website and we can start coding.


We fire up Visual Studio and create a new UWP project.


To achieve our goal (detect the user’s mood and change the background) we’re going to develop a simple UI where the user presses a button; we take a picture of him/her, send that picture to Azure, and, based on the result, load a background image.

Before we write code we need to set up our application capabilities in the manifest and download some NuGet packages. We double-click on the Package.appxmanifest file in the Solution Explorer, go to the Capabilities tab, and check the Webcam and Microphone boxes.


Then we download the Microsoft.ProjectOxford.Emotion NuGet package, which contains some helper classes to deal with the Azure Cognitive Services. In the Solution Explorer we right-click and select Manage NuGet Packages. In the search box we type “Microsoft.ProjectOxford.Emotion” and download the package.


With the following XAML we draw a simple UI with a Button to trigger the camera.

<Page x:Class="IC6.EmotionAPI.MainPage"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
      xmlns:local="using:IC6.EmotionAPI"
      xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
      xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
      mc:Ignorable="d">

    <Grid Name="myGrid">
        <Button Click="Button_Click">Take a camera picture</Button>
    </Grid>
</Page>


In the handler of the click event we write:

private async void Button_Click(object sender, RoutedEventArgs e)
{
    try
    {
        using (var stream = new InMemoryRandomAccessStream())
        {
            await _mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), stream);

            // Rewind the stream before sending the captured picture to Azure.
            stream.Seek(0);

            var emotion = await MakeRequest(stream.AsStream());

            if (emotion == null)
            {
                await new MessageDialog("Emotion not detected.").ShowAsync();
                return;
            }

            var imgBrush = new ImageBrush();

            if (emotion.Scores.Sadness > emotion.Scores.Happiness)
            {
                imgBrush.ImageSource = new BitmapImage(new Uri(@"ms-appx://IC6.EmotionAPI/Assets/sad.jpg"));
            }
            else
            {
                imgBrush.ImageSource = new BitmapImage(new Uri(@"ms-appx://IC6.EmotionAPI/Assets/happy.jpg"));
            }

            myGrid.Background = imgBrush;
        }
    }
    catch (Exception ex)
    {
        await new MessageDialog(ex.Message).ShowAsync();
    }
}

In this method we’re leveraging the power of the MediaCapture class, which provides functionality for capturing photos, audio, and video from a capture device, such as a webcam. The InitializeAsync method, which initializes the MediaCapture object, must be called before we can start previewing or capturing from the device.

In our exercise we’re going to put the MediaCapture initialization in the OnNavigatedTo method:

protected async override void OnNavigatedTo(NavigationEventArgs e)
{
    if (_mediaCapture == null)
    {
        await InitializeCameraAsync();
    }
}


InitializeCameraAsync is a helper method we write to search for a camera and try to initialize it if we find one.

private async Task InitializeCameraAsync()
{
    // Attempt to get the front camera if one is available, but use any camera device if not
    var cameraDevice = await FindCameraDeviceByPanelAsync(Windows.Devices.Enumeration.Panel.Front);

    if (cameraDevice == null)
    {
        Debug.WriteLine("No camera device found!");
        return;
    }

    // Create MediaCapture and its settings
    _mediaCapture = new MediaCapture();

    var settings = new MediaCaptureInitializationSettings { VideoDeviceId = cameraDevice.Id };

    // Initialize MediaCapture
    try
    {
        await _mediaCapture.InitializeAsync(settings);
    }
    catch (UnauthorizedAccessException)
    {
        Debug.WriteLine("The app was denied access to the camera");
    }
}

/// <summary>
/// Attempts to find and return a device mounted on the panel specified, and on failure to find one it will return the first device listed
/// </summary>
/// <param name="desiredPanel">The desired panel on which the returned device should be mounted, if available</param>
/// <returns>The desired device, or the first device found, or null if none is available</returns>
private static async Task<DeviceInformation> FindCameraDeviceByPanelAsync(Windows.Devices.Enumeration.Panel desiredPanel)
{
    // Get available devices for capturing pictures
    var allVideoDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);

    // Get the desired camera by panel
    DeviceInformation desiredDevice = allVideoDevices.FirstOrDefault(x => x.EnclosureLocation != null && x.EnclosureLocation.Panel == desiredPanel);

    // If there is no device mounted on the desired panel, return the first device found
    return desiredDevice ?? allVideoDevices.FirstOrDefault();
}

Let’s focus on the MakeRequest method we called in the click event handler, because this is where we use the Project Oxford library to detect emotions.

private async Task<Emotion> MakeRequest(Stream stream)
{
    var apiClient = new Microsoft.ProjectOxford.Emotion.EmotionServiceClient("f1b67ad2720944018881b6f8761dff9a");

    var results = await apiClient.RecognizeAsync(stream);

    if (results == null) return null;

    return results.FirstOrDefault();
}

We need to create an instance of the Microsoft.ProjectOxford.Emotion.EmotionServiceClient class. In the constructor we pass the key obtained from the Azure portal at the beginning of this post. Then we call the RecognizeAsync method. Here we’re using the overload with the Stream parameter because we have our picture saved in memory; there is also an overload that accepts a URL string. With this call the Azure platform is now doing its magic, and soon it’ll deliver the result. RecognizeAsync returns an array of Emotion. An Emotion is made of a Rectangle reference and a Scores reference. The Rectangle instance tells us the coordinates of the detected face, while the Scores instance tells us the confidence of every mood that Azure can detect: sadness, neutral, happiness, surprise, fear, and anger. Based on this data we can write “ifs” to do some funny things, like changing the background of our main window.
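As a sketch of those “ifs”: PickBackground is a hypothetical helper (not part of the SDK), and the Scores property names are assumed to match the moods listed above.

```csharp
// Hypothetical helper: choose a background asset by comparing two of the
// confidence scores returned by the Emotion API.
private static string PickBackground(Microsoft.ProjectOxford.Emotion.Contract.Scores scores)
{
    // More sadness than happiness? Show the sad background.
    return scores.Sadness > scores.Happiness
        ? @"ms-appx://IC6.EmotionAPI/Assets/sad.jpg"
        : @"ms-appx://IC6.EmotionAPI/Assets/happy.jpg";
}
```

The same pattern extends to the other moods: compare the relevant scores and map the strongest one to an asset.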


In this post we learned how to detect the current mood of our user. We achieved that by using the front camera to take a picture and then calling the Azure Emotion API to guess whether our user is happy or not. We had to set the webcam capability in the app manifest so that the OS can ask the user for the privacy permission.

If you want to learn more about the MediaCapture class, visit MSDN and the Azure Cognitive Services website. The source code of the app developed in this post is available on my GitHub.


If you have questions you can use the comment section below or you can find me on Twitter! If you liked this post, share!

Integrating Twitter with Universal Windows Platform

Twitter is my favorite social network. It is useful both for work and for fun: I read Twitter often and post some tweets, too.

In this post we’re going to see how to read information from Twitter about the logged-in user and how to interact with the Twitter API from a Universal Windows Platform (UWP) app. Because we’re lazy, we’re going to use the Linq2Twitter library, available on GitHub, to make our life easier.


The first step to start our work is registering the app in the Twitter Developer Portal.


Without this step we cannot access the Twitter API. In order to register our app we need a Twitter account.

With our Twitter account set up, we can go to the Twitter Developer Portal, where we register our app by clicking on “Create New App”.


After that, registering the application is as easy as filling in this form:


Name: the name of our application.
Description: a simple description of what our app can do.
Website: the reference website for our app.
Callback URL: the return address after a successful authentication. We do not need this in our example because we’re building a UWP app.

At the end we agree with the “Developer Agreement” and click on “Create your Twitter application”.

If the process completes successfully we can manage our application settings in a page that looks like this (my app is called Buongiorno):


To make valid calls to the Twitter API we need the Consumer Key (API Key) and the API Secret. We can read the Consumer Key under the Application Settings section. To read the API Secret we need to click on “manage keys and access tokens”.


On this page we can read both the API Key and the Secret. We need to keep in mind that these values are sensitive information and must not be made public, because other (malicious) developers could impersonate our application and do harmful things.

Now we’re finished with the Twitter website and we can start writing code!


We open a new UWP project with Visual Studio.


We can give it any name, and then Visual Studio prepares a blank app for us.

The next thing to do is to import the Linq2Twitter library available as a NuGet package. Right-click on the project in the Solution Explorer and click Manage NuGet Packages.


Next we search for “Linq2Twitter” in the browse section and download the package with the download arrow icon on the right.


Visual Studio will prompt us to accept licenses and dependencies. We click Accept and move on. The NuGet system will take care of the download process, and at the end we’ll be ready to use the library without any further clicks.

In the MainPage.xaml we make some basic UI to trigger the Linq2Twitter library and display the logged user timeline.

Our goals are:

· Retrieve user timeline

· Post a tweet.


The XAML code to achieve this layout is the following:

<Page x:Class="Buongiorno.MainPage"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="1*" />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>

        <Button Content="Get Timeline" Name="btnGetTimeline"
                Click="BtnGetTimeline_Click" />

        <ListView Name="TweetList" Grid.Row="1" ItemsSource="{Binding}">
            <ListView.ItemTemplate>
                <DataTemplate>
                    <StackPanel Margin="2">
                        <TextBlock Text="{Binding User.ScreenNameResponse}" />
                        <TextBlock Text="{Binding Text}" />
                    </StackPanel>
                </DataTemplate>
            </ListView.ItemTemplate>
        </ListView>

        <StackPanel Orientation="Horizontal" Grid.Row="2">
            <TextBox PlaceholderText="Hello World of Twitter!"
                     Name="txtUserTweet" />
            <Button Name="btnSendTweet" Content="Send tweet"
                    Click="btnSendTweet_Click" />
        </StackPanel>
    </Grid>
</Page>


In the code-behind file (MainPage.xaml.cs) we’ll code our logic to leverage Linq2Twitter.

Starting from the click event of btnGetTimeline we write:

private async void BtnGetTimeline_Click(object sender, RoutedEventArgs e)
{
    try
    {
        UniversalAuthorizer auth = await Authenticate();

        using (var twitterCtx = new TwitterContext(auth))
        {
            var srch = await
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.Home
                 select tweet).ToListAsync();

            var observableTweets = new ObservableCollection<Status>(srch);

            TweetList.DataContext = observableTweets;
        }
    }
    catch (Exception ex)
    {
        var msg = new MessageDialog(ex.Message, "Ops!");
        await msg.ShowAsync();
    }
}


In this method we are basically: 1) authenticating to Twitter, 2) retrieving the timeline of the logged-in user, and 3) displaying the result in the UI.

We need to focus on the Authenticate method. It takes care of requesting authorization from Twitter to use the API, opening the login user interface, and saving the tokens so that the user is never asked for credentials again on every API call. The tokens are saved in the local app storage; I recommend this MSDN reading for further details about app data storage. All this is done in a few lines of code thanks to the Linq2Twitter methods.

private static async Task<UniversalAuthorizer> Authenticate()
{
    var localSettings = Windows.Storage.ApplicationData.Current.LocalSettings;

    var auth = new UniversalAuthorizer()
    {
        CredentialStore = new InMemoryCredentialStore()
        {
            ConsumerKey = "<your consumer key here>",
            ConsumerSecret = "<your consumer secret here>",
            OAuthToken = localSettings.Values["OAuthToken"]?.ToString(),
            OAuthTokenSecret = localSettings.Values["OAuthTokenSecret"]?.ToString(),
            ScreenName = localSettings.Values["ScreenName"]?.ToString(),
            UserID = Convert.ToUInt64(localSettings.Values["UserId"] ?? 0)
        },
        Callback = ""
    };

    await auth.AuthorizeAsync();

    // Save credentials.
    localSettings.Values["OAuthToken"] = auth.CredentialStore.OAuthToken;
    localSettings.Values["OAuthTokenSecret"] = auth.CredentialStore.OAuthTokenSecret;
    localSettings.Values["ScreenName"] = auth.CredentialStore.ScreenName;
    localSettings.Values["UserId"] = auth.CredentialStore.UserID;

    return auth;
}


The important step to note is setting our app’s Consumer Key and Consumer Secret, which Twitter assigned when we registered our app at the beginning of this post. On the first authentication, the UniversalAuthorizer will open the Twitter authorization UI for us.


At the end of the authentication process, the auth reference in our C# code will hold the OAuthToken and OAuthTokenSecret in the CredentialStore variable; we save them locally for future use and to avoid this pop-up on every API call.

The result will be something like this:


The btnSendTweet event handler implements our logic to write a tweet:

private async void btnSendTweet_Click(object sender, RoutedEventArgs e)
{
    if (string.IsNullOrWhiteSpace(txtUserTweet.Text)) return;

    var tweetText = txtUserTweet.Text;

    try
    {
        UniversalAuthorizer auth = await Authenticate();

        using (var twitterCtx = new TwitterContext(auth))
        {
            await twitterCtx.TweetAsync(tweetText);
            await new MessageDialog("You Tweeted: " + tweetText, "Success!").ShowAsync();
        }
    }
    catch (Exception ex)
    {
        await new MessageDialog(ex.Message, "Ops!").ShowAsync();
    }
}


As always we need to authenticate and then call the TweetAsync method of TwitterContext to post our tweet.


In this post we learned how to build a simple custom Twitter client that reads our timeline and can post tweets. The main points were to register our app in the Twitter Developer Portal, leverage the Linq2Twitter API to perform the OAuth authentication and save the tokens in local storage, and call the Linq2Twitter API to fetch the timeline and to tweet.

If you want to learn more, you can refer to the Linq2Twitter project on GitHub and the official Twitter API documentation. The source code of this example is available on my GitHub.

If you liked this post please share!

Development in the Microsoft world (April 2017)

Disclaimer: I have no idea what I’m talking about; I’m in a brainstorming phase and I might say some really silly things.

So, if I wanted to develop an app from a blank sheet in 2017, staying within the ecosystem of Microsoft tools, I could choose among:

  • .NET Framework 4.6(.2): the famous, very complete standard Framework we all know, for Windows-centric applications and web;
  • .NET Core: the evolution of the .NET Framework (I haven’t figured out whether the latter will stop at version 4.6.2). Redesigned, open-source, for cross-platform (Windows, macOS, Linux) server/console (?) applications (ASP.NET Core). There are no cross-platform GUIs (like WPF).
  • Xamarin: for cross-platform mobile applications.
  • UWP: part of .NET Core, for developing apps that run across all the variants of the Windows 10 OS (from Enterprise to IoT, including Mobile and Xbox One).

Let’s put it graphically (thanks to the Microsoft blog):


The incredible thing is that with Visual Studio and C# you can span all these technologies. Fantastic.

Not to mention the technologies/integrations with graphics engines such as Unity (for building games and VR/AR applications) and the power of the cloud (Azure, which is a huge world on its own).

Green thumb

To me, software is like a plant.

One of the most widespread metaphors for describing to non-practitioners how software is built, and how complex that is, compares it to building a house. It’s a metaphor that holds up; nothing wrong with it. However, it’s limited, because at a certain point the house is finished and becomes “rigid”. It’s no longer so modifiable; it’s no longer possible to make certain additions or remove pieces.

Precisely to avoid this feeling of rigidity, I prefer the plant metaphor. The beginning of development is like planting the seed. Then a small shrub appears, but for it to stand straight we need to help it with a support stake. Then it starts to gain a certain solidity, with a trunk and the first major branches. The process happens calmly, with care; we have to love our little plant and show off our best green-thumb skills. Then it’s an explosion of branches and leaves, which can keep growing in every direction and at any moment. But beware: some branches can get sick, others need pruning. The plant needs constant care, trimming wherever necessary.

That’s why software is like a tree. It’s something that keeps changing, and a good development team knows its strengths and weaknesses. It knows which parts need pruning, and prunes them without hesitation. It knows which parts are standing on a support, and grows them to make them sturdier.

And you? What comparison do you use to explain how software is built and maintained?

A promise in disguise

To estimate: to roughly evaluate the numerical value of a quantity. – Vocabolario online.

When someone asks us for an estimate of how long development will take, does it feel like they’re asking for a rough indication of person-hours? Do we get the impression that the answer we give can be approximate? That we can later change our mind and refine it?


Probably not. I would add that such an answer could blow up in our hands when development doesn’t finish within the estimated time. Why? Because when a manager asks for an “estimate”, they are usually actually asking for a commitment, or for a plan to reach a target. The distinction between estimates, targets, and commitments is fundamental to understanding what an estimate is, what it is not, and how to make our next ones better.

An estimate is, strictly, what the dictionary says. A target is a statement of a business goal to be reached (e.g., this feature must be completed by June 1st because we’ll show it at a trade fair). A commitment is a promise to deliver certain functionality, at a specific quality level, by a certain date. An estimate can be the same as a commitment, or more aggressive, or more conservative. Let’s not take for granted that the commitment must equal the estimate.

Many executives/managers don’t have the technical background to distinguish between estimates, targets, and commitments. It therefore becomes the technical lead’s duty to translate the executive’s request into more specific terms.