Powershell for devs, part 2

April 28th, 2018

Continuing from part 1.

New to Powershell? Let me jot down a few good-to-know things. Some you already know. Some might save you hours of googling.

Below is a simple module. I will explain it row by row.

Set-StrictMode -Version 2.0

<#
.SYNOPSIS
Short description.

.DESCRIPTION
Long description.

.PARAMETER name
This does not show with Get-Help.

.PARAMETER birthDate
This does not show with Get-Help.

.EXAMPLE
An example...

.NOTES
General notes.
#>
function Get-PersonData{
    param(
        [Parameter(Mandatory=$true,
            HelpMessage='The name of the culprit.')]
        [string] $name,
        [datetime] $birthDate = (Get-Date)
    )
    $startTime = Get-Date -Format 't'
    Write-Verbose "Start:$startTime"
    "Name=$name, Date=$($birthDate.ToString('yyyyMMdd'))."

    # Call method | Filter | Sort | Return.
    $foundPerson = GetPeople `
        | where {$_.name -eq $name } `
        | Sort-Object born `
        | Select-Object -First 1

    Write-Host "foundPerson:[$foundPerson]"
    Write-Verbose "Stop:$(Get-Date -Format 't')"

    return $foundPerson
}

function GetPeople(){
    # Create array of key-value pairs. (Example values for illustration.)
    $people =
        @{name='Alpha'; born=(Get-Date '1955-05-05')},
        @{name='Beta'; born=(Get-Date '1960-06-06')}

    return $people
}

Export-ModuleMember Get-PersonData
# Export-ModuleMember GetPeople

Use Set-StrictMode. See part 1 of this blog series.

<# .SYNOPSIS... #>

This type of comment right before a method is recognised by Powershell and ends up in Get-Help. Making a habit of explaining the intention of your methods is considered good practice.

Inside the method is

        HelpMessage='The name of the culprit.'

This is what the parameters look like. One can set whether a parameter is mandatory, help text to be picked up by your favourite text editor, the [type] and a default value.

$startTime = Get-Date -Format 't'

A variable is set to a string. Other scripting languages send strings around; Powershell sends proper objects. Here Get-Date -Format converts the DateTime value to a string.

Write-Verbose "Start:$startTime"

Built into Powershell is the possibility to call (almost) anything with a -Verbose flag. Only then is the Write-Verbose called. Like a simple logging level.
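If it helps to see the pattern outside Powershell, here is a bash sketch of the same idea; the VERBOSE variable and the log_verbose helper are invented for illustration:

```shell
#!/usr/bin/env bash
# Home-grown analogue of -Verbose: messages only appear when the flag is set.
VERBOSE=${VERBOSE:-0}

log_verbose() {
    if [ "$VERBOSE" -eq 1 ]; then
        echo "VERBOSE: $*"
    fi
}

log_verbose "Start:$(date +%H:%M)"    # silent unless VERBOSE=1
echo "Doing the actual work."
```

Run it as is and only the work line prints; run it with VERBOSE=1 and the verbose line shows too.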

"Name=$name, Date=$($birthDate.ToString('yyyyMMdd'))."

Nothing strange here at first sight, until you execute it in a console. Then you realise this string is output, because it is not put into something else, like a variable.

Also “$($variable.Method)” is the way to call methods inside a string.

$foundPerson = GetPeople `
    | where {$_.name -eq $name } `
    | Sort-Object born `
    | Select-Object -First 1

Powershell is said to be able to use Linq. This is technically true but the syntax is so weird that I have never used it. This code though has (almost) the same behaviour and is easy to read.

GetPeople is a method call. The backtick concatenates lines and circumvents Powershell's automatic statement termination at line end. | pipes objects, not strings. where is an alias for Where-Object. The rest is… Linqish.
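The shape of the pipeline is the same as a classic shell pipeline, except Powershell passes objects where the shell passes text. A bash cousin of the snippet, with invented sample data:

```shell
# Filter on name, sort on birth year, take the first match.
people='Alpha 1970
Beta 1960
Alpha 1955'

foundPerson=$(printf '%s\n' "$people" \
    | grep '^Alpha ' \
    | sort -n -k2 \
    | head -n 1)
echo "foundPerson:[$foundPerson]"    # prints foundPerson:[Alpha 1955]
```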

Write-Host "foundPerson:[$foundPerson]"

Write-Host with an object like $foundPerson outputs the contents of the object, not just the type as in C#.

Write-Verbose "Stop:$(Get-Date -Format 't')"

This string is output only if the method call is made with -Verbose. Plus an example of how to write a method call inside a string.

return $foundPerson

Finally nothing surprising. Except that return can be left out to make the code harder to read.
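The closest bash analogy, for what it is worth: a function's stdout is its return value there too, and no return keyword is involved (GetName is an invented example):

```shell
GetName() {
    echo 'Alpha'    # no return keyword: the output is what the caller receives
}

name=$(GetName)
echo "name=[$name]"    # prints name=[Alpha]
```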


The @ sign tells Powershell to create a list of key-value pairs, often called a hash table.

Note the comma character. That makes $people an array of key-value pairs.

Export-ModuleMember Get-PersonData

Only exported methods are visible outside the module. It is like making them public.

Powershell for devs, part 1

April 28th, 2018

New to Powershell? Let me jot down a few good-to-know things. Some you already know. Some might save you hours of googling.

Use Set-StrictMode -Version 2.0.

Powershell 5.1 is the last Windows Powershell. From 6 it runs on Dotnet core and more platforms.

Use Pester for automatic testing. It runs tests and can mock. Note that mocking works differently in Powershell than C# as they load code in different ways.

Don’t patch together your Powershell scripts. Use the systems thinking you usually do and create a sturdy, thought-out solution with bricks of the right size. Just like you would for any other solution.

For caveats let me explain the file below, row by row.

File JustASimpleScript.ps1

Set-StrictMode -Version 2.0


$aVariableThatWasNeverSet    # Evaluating a variable that has never been set.

$temp = 'LocalVariable'

function LocalFunction( $foo, $bar ){
    Write-Host "Local variable is $temp."
    $foo    # The last executed row is returned.
}

LocalFunction 'MyParameter'

Enter the above in your favourite text editor. Save it as JustASimpleScript.ps1. Open a console and navigate to the proper folder. Execute Powershell to get Powershell started. Then execute .\JustASimpleScript.ps1. The result shows both an exception and some more proper output.

Set-StrictMode -Version 2.0

Set-StrictMode is like `option explicit` in old VB6: it throws an error if you try to evaluate a variable that has not been set, if you call a function with parentheses around the arguments (method-call syntax), or if you reference a property that does not exist.
Hardly any code example at Stackoverflow shows it, and almost no blog article does.
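For comparison, bash has a similar safety switch, set -u, which turns the use of an unset variable into an error; a minimal sketch:

```shell
# Without strict mode an unset variable silently expands to nothing:
lenient=$(bash -c 'echo "value: [$neverSet]"')
echo "$lenient"

# With set -u the very same reference is a hard error:
if bash -c 'set -u; echo "value: [$neverSet]"' 2>/dev/null; then
    strict_result='no error'
else
    strict_result='errored as expected'
fi
echo "$strict_result"
```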


$aVariableThatWasNeverSet

Variables are recognised by the leading dollar sign.

As we used Set-StrictMode above, this row throws an exception. But… the program continues to run!

$temp = 'LocalVariable'

Strings are delimited with apostrophes. Quotation marks work but are overkill, see below.

function LocalFunction( $foo, $bar ){

Nothing special about declaring a function like this. Declare a function with parentheses but call it without; it is very easy to get this wrong. Also: the parameters are optional by default.

Write-Host "Local variable is $temp."

Write-Host is the normal way of outputting text in the console. Whether it is the correct way is another discussion I won’t dive into without more knowledge.

Also note the quotation marks. They mean that anything that looks like a variable inside should be evaluated. Just like PHP. Just like C# with a string prefixed with $. Many online examples use quotation marks for every string. At the time of writing I consider that less good.
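bash happens to draw exactly the same line between the two quote styles, so the behaviour is easy to try out (the name variable is invented):

```shell
name='World'
double="Hello $name."    # quotation marks: $name is evaluated
single='Hello $name.'    # apostrophes: the text is taken literally
echo "$double"    # prints Hello World.
echo "$single"    # prints Hello $name.
```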

$foo    # The last executed row is returned.

To fool newbies, Powershell implicitly returns the last executed row that has an output. To make it more readable, I suggest prefixing with return like so:

return $foo    # The last executed row is returned.

Comments start with a # sign.


Stuff within curly brackets is a code block. A code block is not only the contents of a method or an if statement but can also be passed around, like an () => {…} lambda in C#.

LocalFunction 'MyParameter'

This is a method call. As long as it is Powershell, leave out the parentheses. Otherwise you have converted your list of parameters into a call with a single parameter, where that single parameter is a list. The rule is not hard to remember, but reading code that calls a method with a list as parameter is hard to grasp for a newbie. Adding insult to injury, calling a C# method from Powershell might change the rule.

Using a file like this, ending in ps1, is typically done by “dot sourcing”. It is quick, dirty and pollutes the global name space. Things you would never do in your “regular language”.

A call like below makes the contents live only just in the call.

.\JustASimpleScript.ps1
If you want to make the $temp variable and LocalFunction live on, you prefix the call with a period and a space like so:

. .\JustASimpleScript.ps1
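bash makes the very same distinction between executing and sourcing a file, if you want to see the effect somewhere familiar; a sketch with a throwaway script (names invented):

```shell
# A little script that sets a variable and defines a function.
cat > /tmp/JustASimpleScript.sh <<'EOF'
temp='LocalVariable'
LocalFunction() { echo "Hello from LocalFunction"; }
EOF

bash /tmp/JustASimpleScript.sh       # executed in a child process: nothing survives
echo "after executing: [${temp:-}]"  # still empty here

. /tmp/JustASimpleScript.sh          # sourced: contents live on in this shell
echo "after sourcing: [$temp]"
LocalFunction
```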

But probably you want to make a module instead.

When you are testing, you are not testing code; you are testing intention

April 4th, 2018

I just had to get that off my chest.

How I tested *every* authorisation, authorised or not

March 10th, 2018

Well… I didn’t, not down to bit level. But I got closer than any time before as every business case was tested.

Don’t expect my rudimentary text below to simply answer how to do it; it will just sketch the landscape. It took me several nights and 3 iterations before I solved all the tidbits.

More important than to test that every Role could get to its authorised Product was to see that it could not get to unauthorised ditto.
So I had to test every combination.

Testing every permutation of User, Role and Product was not feasible. Each entity has several properties and they are, mostly, not related to authorisation. Then we have 2^64 kinds of userId, where most of them are not interesting and not even in use. To continue: a test on “Role of type X or Y” is really “Role is HiredInProductsCompany”.
So I sat down and extracted the if statements from the code. They were like “if loggedOn” and “if in Role x or y”. Every such statement was extracted as a method and moved into a (temporary) helper lib.

When I was sure, by visual inspection, that I had caught everything, I applied some business thinking and manipulated and rearranged the methods. They became fewer and matchable to business requirements. Gone was “if user.Role == Administrator and user.Company == product.Company”; instead came “if user.IsAdministratorAtProductsCompany(product)”.

Note that during this process I did not change any logic and, testing aside, the present state could be shipped at any time.

Now I had to get rid of any technical remains. On the outside it looked ok, as the method names were very descriptive in business lingo, but inside the authorisation method was “if user.Id == 0” or “if challengedUser.Id == persistedUser.ID”. That was not usable, since Id as an integer is a technical solution (often a persistence layer construction) for recognising an entity. In business terms it is more like “user.IsPersisted” and “if challengedUser.SameAs(persistedUser)”. I continued reducing the problem space to what I really wanted to test when authorising.
This way I seriously minimised the permutations, as an authorisable User did not care about “Id” or “Name” or “BusinessPartner” but only “IsLoggedOn” and “Role”. With 6 roles that means 12 permutations. With Project I came to, say, 32 permutations, and User 4. This gives in all 12*32*4 ≈ 1500 variants. Not a problem to test every combination now, if I just put some (business) intelligence into creating the tests. #win

Let’s say 240 of them were positive (authorised) and the rest negative.

I started with creating a simple lib for permutating every possible input and running them through One Test to green-light “not authorised”. Everything red should be authorised. Already here I might have found combinations that were authorised when they should not have been.

Then I, manually, created a list of every permutation allowing authorisation. Well… manually for a programmer means reducing to loops and ifs, so one method could create every authorised combination of type A and one of type B. Altogether that is, say, 10 different methods and some manual ones. They were concatenated into a list and I made the Test assert authorised and not-authorised according to this list.
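To show the shape of the approach, and nothing more, here is a toy sketch; the roles, the allowed list and the is_authorised stand-in are all invented:

```shell
#!/usr/bin/env bash
# Enumerate every permutation of the reduced inputs and check each one
# against an explicit list of combinations that should be authorised.
roles='Admin Editor Viewer'
logged_on_states='yes no'
allowed='Admin:yes Editor:yes'    # the manually curated authorised list

# Stand-in for the extracted, business-lingo authorisation method.
is_authorised() {
    case "$1:$2" in
        Admin:yes|Editor:yes) return 0 ;;
        *)                    return 1 ;;
    esac
}

failures=0
for role in $roles; do
    for on in $logged_on_states; do
        combo="$role:$on"
        case " $allowed " in
            *" $combo "*) expected='authorised' ;;
            *)            expected='not-authorised' ;;
        esac
        if is_authorised "$role" "$on"; then
            actual='authorised'
        else
            actual='not-authorised'
        fi
        if [ "$actual" != "$expected" ]; then
            echo "MISMATCH for $combo"
            failures=$((failures + 1))
        fi
    done
done
echo "failures=$failures"    # prints failures=0
```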

So now I had a test for every kind of authorisation check, testing both authorised and not authorised, and the tests look like business logic.

Review of the Riese & Müller Packster 80 cargo bike

February 12th, 2018

[More information to come.]

This bike.


The bike solves my problems, I use it every day and I am happy with it.
The price tag is (too) high (but I think that of a premium car too).
The motor and transmission from Bosch are poorly engineered (not made for outdoor use or for carrying loads).



It is heavy. It is not certain that matters.

It is nothing you feel as long as you are riding. For me as an ordinary man it is nothing unmanageable when I walk it or shove it around, but compared to something light it is heavy. I don’t know how heavy it is compared to other cargo bikes.
Unfortunately I haven’t weighed it to get a fair figure to compare with; one with the battery on and the box mounted.

It is more nimble than it looks.

In tight 90-degree corners you have to plan a little and perhaps swing out into the oncoming lane. In a 180-degree turn there is a big risk you have to get off and walk it back and forth to get it around. Parking it in a bike rack is like parking a car: you look first, then turn, and succeed on the first or second attempt.
Riding or walking it with two 6-year-olds is no problem.

It wobbles when you ride without holding the handlebar.

The solution is to hold the handlebar, so it is not critical, but it hints at flex in the frame. I have not compared with any other cargo bike.
I have ridden at 50+ km/h, without children, and it feels safe; no wobbly feeling at all. That is good, because I would rather not fall at that speed with children in the box. With two 6-year-olds in the box I kept 40+ km/h and was not the least worried. The kids howled with joy and thought it was better than Liseberg.

It falls slowly.

We have fallen both with and without children. Higher speed in the former, more of a “dropped the bike” in the latter. The children sat strapped in, wondering what they should do while waiting for us to lift the bike up again. It felt considerably safer than having a child in a seat on the luggage rack.
One characteristic of falling with this bike is that it falls more slowly than a regular bike. I cannot say exactly what it is, but I feel I have more time to react, and the steering mechanism prevents full lock so you avoid getting the handlebar in your stomach or getting stuck between frame and handlebar.
I have also provoked the bike with slippery ground, a turn, a downhill slope and speed into a proper fall (without children). Without being able to compare with the same fall on a regular bike (falling is not that fun) I thought it felt fine.

You do not sit leaning forward.

Since you do not pedal with force, there is very little holding up your behind and your back. The sprung saddle is no coincidence. This should apply to all e-bikes, and the more so the more reclined the riding position is.
One of the reasons I chose this cargo bike over another was that this one was less reclined. I would have wished for even more forward lean, though.

The kickstand is good and a little bad.

First the good: it is incredibly stable. I let the 7-year-olds climb in and out of the parked bike and play in it without being afraid or even having to stand nearby.
It must not be parked facing downhill though, as the stand has just the right height for the bike to slide slowly forward and possibly (I have tested a little and not had it happen) fold up the stand. Uphill should work fine. Possibly a thick rubber band over the brake levers could solve it. Three-wheeled cargo bikes have a parking brake; perhaps this one should have one too. Or lock the rear wheel.
It is also quite common that I hit my shin when putting the bike on its stand.


Is front suspension really needed? I cannot tell.
On the other hand I feel that the whole front wheel assembly (whether it is the headset, the hub bearing, the suspension or the brake I cannot determine) has play when braking. It is not a problem, but on a delicate winding descent it is hard to feel how the brake bites.


More fun than black. More boring than orange.
Very nice red colour on the cover.


The walk assist is substandard.

It is too weak, far too weak. One kid in the box and a small uphill slope gets tough, especially since it is harder to walk a cargo bike, which is heavier than a regular bike and which you do not want to lean.
It barely pulls its own weight if the slope is more than a degree or so uphill.
It is unergonomic. Pressing a small plastic button and then holding another one down while also keeping your hand on the grip is awkward. Add a pair of gloves, since some of us do not ride only on warm sunny days, and you cannot even feel the button properly.

Temperature-sensitive motor.

The specification for the motor is only a few degrees below freezing.
A bike with a motor of that price mounted should not be allowed to be sold in Sweden without a warning sign.
Bosch is based in Germany. There are certainly sub-zero temperatures there.
I have ridden in 5 cm of snow and it works fine. The wheels are wide, which is both good and bad in snow, but I assume the wide wheels are needed because of the weight.

Temperature-sensitive battery.

I do not remember the spec for the battery, but a battery costing several thousand should handle many degrees below freezing. According to the specification it does not.

A lot of friction in the motor and drivetrain.

Riding without the battery is out of the question. It is really, really hard to turn the pedals without battery assistance.
For a cheap motor I would have been OK with that, but for one as expensive as this I am not satisfied. There should be a warning sign: this “bike” has no practical use without a battery.
[More information to come.]

Publishing Dotnet core 2 – dependencies manifest … was not found

August 31st, 2017

If you try to run a stand alone / self contained Dotnet core 2 solution you might run into something like:

 An assembly specified in the application dependencies manifest (TestRunCore2.deps.json) was not found:
 package: 'runtime.win-x64.Microsoft.NETCore.App', version: '2.0.0'
 path: 'runtimes/win-x64/lib/netcoreapp2.0/Microsoft.CSharp.dll'

It might be because you haven’t published properly to get all the dotnet files.

Or, as in my case, I was standing in the […\win10-x64] folder and not in the [..\win10-x64\publish] folder.
I thought that by standing in the [\win10-x64] folder, where my [TestRunCore2.exe] file was, dotnet would reach into the [publish] folder.
I then noticed that the [publish] folder contains both my exe and the dotnetcore files.

Simply connect a mongo client container to a mongodb container

August 14th, 2017

I tried this on OSX. It probably works on Windows too.

In this article we create a container with mongodb and some contents and then connect to it from another container.
Just for personal reasons the client container is really an “aspnet container” and not connected to mongodb to start with.

Even though I liked typing my way around Docker, I tried and discovered the free Kitematic by Docker.
It gives me a very simple overview, but I hope it evolves some in the future to include a little more, like the git-githubdesktop-sourcetree journey.

Create a mongo database with contents

Open Kitematic and create a mongodb container.

Select Exec to get a terminal.

Now, inside the container, connect to the built-in mongo:

mongo
Just for fun, see what databases we have:

show dbs

Create two new records.

db.runCommand({insert:"projects", documents:[{_id:1, name:"Alpha"}] })
db.runCommand({insert:"projects", documents:[{_id:2, name:"Beta"}] })

The result should be

{ "n" : 1, "ok" : 1 }

for each call, where “n” denotes the number of records inserted and “ok” the success.

See what databases we have again, to find the new database “projects”.

show dbs

Step into the database: (Is this really necessary?)

use projects

Query what we have:

db.projects.find()
Now we have a container running mongodb with data in it.
You can leave mongodb and the container but make sure it is not stopped.

Create another container

Containerising is about selecting an image and then adapting it to your needs.
I use a lot of dotnet and hence choose to select a dotnet core image.

Search in Kitematic for “aspnetcore” and select one. Which to choose can be complex; at the time of writing there are 2 from Microsoft. Which to choose is another subject, and also subject to change.

When the container is started update it and then install mongodb.

apt-get update

apt-get install mongodb

Note: Updating the container with apt-get is something one probably doesn’t do normally, as such tasks should be scripted. But here we are experimenting.

We don’t want the database in this container, only the client, but it is easier to find an apt-get for the whole database than for just a client.

We now have 2 running containers, and if you didn’t fiddle around too much they share a network.
The network can be inspected:

docker network inspect bridge

which results in something like:

        "Containers": {
             "2fae...eb2a": {
                 "Name": "aspnetcore",
                 "IPv4Address": "",
             "f565...3856": {
                 "Name": "mongo",
                 "IPv4Address": "",

There we have the IP addresses. Out of the box Docker containers don’t have a name resolution so we’ll use the IP address to connect.

So connect with:

mongo <IPv4Address of the mongo container>
and query as before:

use projects
db.projects.find()
Two connected containers, one running a mongo database and one connected to it and prepared for aspnetcore love.


git – github – source tree

August 14th, 2017

I thought of calling this article The value of tools.

I started with git at the command prompt. The threshold was high. Not only was the way of treating my precious source code files new but I also had to learn new commands and how to parse the returned text.

Then came what is now called Github desktop. It made simple tasks even simpler. It couldn’t do any hard tasks but I was perfectly comfortable with this since 99% of my tasks are simple. It is just when I mess up Git I need more horse power.

So came Source tree, which made harder tasks easier. Not easy, since one has to concentrate; and Source tree has a number of bugs and caveats that make it not as simple to use as Github desktop.

A client that is very good at what it is good at might suffice.
Building a graphical tool for Git that is both easy to use for less knowledgeable people and complete for an advanced Git user is hard, even impossible.

So: building a simple tool for simple tasks and an advanced tool for advanced tasks is a good idea.

Some Docker memos

August 13th, 2017

List images

docker images

Start a container and a terminal

docker run -ti --rm microsoft/aspnetcore 

--rm is used to have docker remove the container when it is stopped.

Start a container and map a folder

docker run -ti --rm -v '/my/rooted/host/folder':'/MyRootedContainerFolder' microsoft/aspnetcore 

Start a container, open a network hole, map a folder and start a process (a web server in this case)

docker run -p80:80 -ti --rm -v '/MyRootedFolder/WebApplication1/bin/Debug/netcoreapp1.1/publish':'/Web' microsoft/aspnetcore /bin/bash -c 'cd /Web; dotnet WebApplication1.dll' 

Connect to a running container

See more at https://stackoverflow.com/a/30173220/521554

To list the active processes:

docker ps

Then attach through:

docker exec -it <mycontainer> bash

List also non-running containers

docker ps -a

Remove old containers

Find the old containers

docker ps -a

Remove the chosen container

docker rm <container id>

There is no risk of accidentally removing a running container with the above command; docker refuses to remove a running container unless you force it.

See the IP addresses used by the containers

docker network inspect bridge


Aspnet core 1.1 and Visual studio 2017 running in Docker container in OSX

August 12th, 2017

I happen to have OSX as host and Win10 as virtual machine.
I have also mapped a folder so it is reachable from both OSX and Windows.
OSX has Docker installed.

Create the project in Windows/Visual studio 2017

Create a blank solution somewhere both Windows and OSX can reach. In this article I chose

\\Mac\Home\Documents\PROJEKT\Solution1
Create a solution dialogue.

Whatever dotnet version is visible at the top of the dialogue is not of interest, as we are creating a solution file and not much more; the Dotnet core specific stuff is added later. You can find the template by searching for “empty” or “blank”.

Add an Aspnet core application project. Keep the standard name for simplicity. The path is inside the solution: \\Mac\Home\Documents\PROJEKT\Solution1

The Dotnet version is still not important, as it refers to Dotnet framework and we care about Dotnet core.


Select Dotnet core 1.1. This text is written in August 2017 and Dotnet core 2 is due in November. Select WebApi. Do not add docker support. It would probably not make any difference, but in this exercise we are targeting running the container in OSX. If you change your mind and do want Docker-for-windows support you can always add that later with the click of a button.

When the project is added compile and run to see that all cog wheels are in place and in working order.

Tip from the trenches: Ctrl-F5 compiles, starts the web server and pops up a web browser in one keystroke, without having to start the debugger.

Note the URL. It is something like http://localhost:2058/api/values where /api/values is something to briefly remember. See that the browser window shows the text ["value1","value2"]. It is created by the Get method in the class Controllers/ValuesController.cs, the “normal MVC way”.

Publish in Windows

If you right click the project (=activate the context menu in the solution explorer pane on the WebApplication1 project) there is a choice “Publish…”. AFAIK it is used for Windows or Dotnet framework stuff so leave it be.

Instead we use the CLI for restoring, publishing and activating.

If you open the console in Windows you cannot use the UNC path we have put the project in. So instead use pushd like so:

> pushd \\Mac\Home\Documents\PROJEKT\Solution1\WebApplication1
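pushd works the same way in bash, should you want to get a feel for it outside Windows; this sketch uses an ordinary temp directory rather than a UNC path:

```shell
start_dir=$(pwd)
demo_dir=$(mktemp -d)

pushd "$demo_dir" > /dev/null    # go there, remembering where we came from
echo "now in: $(pwd)"
popd > /dev/null                 # return to the remembered directory
back_dir=$(pwd)
echo "back in: $back_dir"
```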

Then restore the files with dotnet restore. Restoring in this case means pulling in all dependencies so we have everything we need.

It will look something like:

> dotnet restore
Restoring packages for V:\Documents\PROJEKT\Solution1\WebApplication1\WebApplication1.csproj…
Generating MSBuild file V:\Documents\PROJEKT\Solution1\WebApplication1\obj\WebApplication1.csproj.nuget.g.props.
Writing lock file to disk. Path: V:\Documents\PROJEKT\Solution1\WebApplication1\obj\project.assets.json
Restore completed in 1,58 sec for V:\Documents\PROJEKT\Solution1\WebApplication1\WebApplication1.csproj.
Restore completed in 1,81 sec for V:\Documents\PROJEKT\Solution1\WebApplication1\WebApplication1.csproj.
NuGet Config files used:
C:\Program Files (x86)\NuGet\Config\Microsoft.VisualStudio.Offline.config
Feeds used:
C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\

Now publish with publish:

> dotnet publish 
Microsoft (R) Build Engine version 15.1.1012.6693
Copyright (C) Microsoft Corporation. All rights reserved.
WebApplication1 -> V:\Documents\PROJEKT\Solution1\WebApplication1\bin\Debug\netcoreapp1.1\WebApplication1.dll

As we didn’t specify an output path we get the result in bin\Debug\netcoreapp1.1\.

> dotnet run
Hosting environment: Production
Content root path: V:\Documents\PROJEKT\Solution1\WebApplication1
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

Note the back slashes and that I never said to change OS. We have just started the web server in Windows.

Check it by opening a web browser and go to http://localhost:5000/api/values

You should have ["value1","value2"] as output. Nothing surprising.

Shut down the application (ctrl-c in the console) and refresh the browser to verify that we really are surfing to our site and not Visual studio and IIS(express).

If you want to go spelunking in the container try opening a terminal directly or connecting one.

Publish in OSX


Open an OSX terminal at your web project, where WebApplication1.csproj is. Typically something like

/Users/username/Documents/PROJEKT/Solution1/WebApplication1
If you do a dotnet run now you get an error.

> dotnet run
/usr/local/share/dotnet/sdk/1.0.1/Sdks/Microsoft.NET.Sdk/build/Microsoft.PackageDependencyResolution.targets(154,5): error : Assets file ‘/Users/username/Documents/PROJEKT/Solution1/WebApplication1/V:/Documents/PROJEKT/Solution1/WebApplication1/obj/project.assets.json’ not found. Run a NuGet package restore to generate this file. [/Users/username/Documents/PROJEKT/Solution1/WebApplication1/WebApplication1.csproj]
/var/folders/5l/1bssc0z152s_shv8j8nb9htm0000gn/T/.NETCoreApp,Version=v1.1.AssemblyAttributes.cs(4,20): error CS0400: The type or namespace name ‘System’ could not be found in the global namespace (are you missing an assembly reference?) [/Users/username/Documents/PROJEKT/Solution1/WebApplication1/WebApplication1.csproj]

So: restore, publish and run.

> dotnet restore
  Restoring packages for /Users/username/Documents/PROJEKT/Solution1/WebApplication1/WebApplication1.csproj…
  Restore completed in 784.3 ms for /Users/username/Documents/PROJEKT/Solution1/WebApplication1/WebApplication1.csproj.
  Generating MSBuild file /Users/username/Documents/PROJEKT/Solution1/WebApplication1/obj/WebApplication1.csproj.nuget.g.props.
  Writing lock file to disk. Path: /Users/username/Documents/PROJEKT/Solution1/WebApplication1/obj/project.assets.json
  Restore completed in 1.7 sec for /Users/username/Documents/PROJEKT/Solution1/WebApplication1/WebApplication1.csproj.
  NuGet Config files used:
  Feeds used:

Why we need to restore it again is something I haven’t grokked yet. To be honest – if I hadn’t tricked you into, unnecessarily, restoring on the Windows machine first you wouldn’t have noticed.

> dotnet publish
Microsoft (R) Build Engine version 15.1.548.43366
Copyright (C) Microsoft Corporation. All rights reserved.
  WebApplication1 -> /Users/username/Documents/PROJEKT/Solution1/WebApplication1/bin/Debug/netcoreapp1.1/WebApplication1.dll

> dotnet run
Hosting environment: Production
Content root path: /Users/username/Documents/PROJEKT/Solution1/WebApplication1
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

Open another console and curl:

> curl localhost:5000/api/values

to get the result:

["value1","value2"]
You can of course open a web browser to do the same.

Don’t forget to stop the application if you want to continue as otherwise a file might get locked.

Start container

Install Docker on your mac unless you already have.
You can probably do whatever is described here on your Win10 machine with Docker for windows.
But I happen to have OSX and a virtualised Windows 10 through Parallels. It is said to be possible to run Docker for windows in Parallels, but then I would have to fork out another 50€ (per year?) for the Pro version; and having a virtualised Windows virtualise yet another machine creates a performance penalty on the host OS too. My machine is running warm and noisy as it is.


docker run -p80:80 -ti --rm -v /Users/username/Documents/PROJEKT/Solution1/WebApplication1/bin/Debug/netcoreapp1.1:/Web microsoft/aspnetcore /bin/bash -c 'cd /Web/publish; dotnet WebApplication1.dll'
Hosting environment: Production
Content root path: /Web/publish
Now listening on: http://+:80
Application started. Press Ctrl+C to shut down.

Caveat: For almost a full day I got the error message docker: invalid reference format, which was somehow related to microsoft/dotnet:1.1.2-runtime. When I rewrote the very same text it suddenly started working. My first guess was that there was a hidden character somewhere, but I rewound to an earlier command that had failed and suddenly that worked too.

If you care already now about the parameters for docker run they are:

I did not want to make this quick start unnecessarily complex by having lots of different ports, so everything runs on port 80.

-p80:80
With that said: as Kestrel (the web server we instantiated in Program.cs) runs in aspnet, it talks on port 80. Then we open a hole in the container from port 80 to port 80. That is what -p80:80 means.

-ti
This gives us a terminal, of sorts, and something about how it receives commands.

--rm
This one cleans up after our run so there is no halted container wasting hard drive space when the container stops.

-v /Users/…:/Web
This parameter lets the container use /Web to reach the folder /Users/… where we have put our code.

microsoft/aspnetcore
This is the name of the image we use. An image is to a container what a class is to an object.
In this article we don’t adapt the microsoft/aspnetcore image, created by Microsoft especially to run aspnet core solutions; we use it as it is.
As we have not specified a version we get the latest.
Microsoft has many images for different uses.

/bin/bash -c 'cd /Web/publish; dotnet WebApplication1.dll'
Execute a command in bash.
The command happens to be “start the web server”.

Call the web server in the container

Then open another terminal and curl:

curl localhost:80/api/values

Yay! The web server in the container runs your dotnet core web application.