Wednesday, March 11, 2015

Azure Architecture Posters

I've long been a fan of Microsoft's technical posters – they convey a great deal of information and are a great way to learn more. Microsoft has just published some updated Cloud IT architecture posters, which you can download from https://technet.microsoft.com/en-us/library/dn919927.aspx?f=255&MSPPError=-2147217396.

The posters available are:

  • Microsoft Cloud Services and Platform Options
  • Microsoft Cloud Identity for Enterprise Architects
  • Microsoft Cloud Security for Enterprise Architects
  • Microsoft Cloud Storage for Enterprise Architects (coming soon)

I just wish I had a nice A2 or A1 colour printer!!


Saturday, March 07, 2015

Azure Portal Shortcut keys

The latest version of the Azure Preview Portal shipped a few weeks ago. One of the features of that new portal is that it now supports keyboard shortcuts.

Those not familiar with the new portal need to learn the concept of blades. These are UI elements which pop out to the right. Bringing up the Virtual Machines blade shows all the VMs in your account; clicking on a specific VM brings up that VM's blade. You can then open subsidiary blades, e.g. to manage the VM's settings. The new portal also has a notification hub, where you get notification messages (e.g. that a VM you created has indeed been created). There's also a billing blade that shows you your subscriptions and how much credit each subscription has.

One of the cool features added to the updated Preview Portal is shortcut keys – single keystrokes that do useful things! There are two sets of shortcut keys:

Hub Menu – when you are focused on the left hand 'hub' in the portal:

H – shows the portal start board (as customised by you!)

N – shows the notification hub and any recent notifications

A – shows the active journey hub – this is a starting point for any sets of blades you have opened (a VM's configuration, etc)

/ – shows the search hub

B – shows the billing hub (all your subscriptions and credit left)

C – shows the Create/New Hub blade

Blade Navigation - If you have several blades open (e.g. VMs, a VM, a VM's setting) you can navigate between the individual blades (i.e. between the blades in an active journey)

J – move the input focus to the prior blade

K – move the input focus to the next blade

F – move the input focus to the first blade

L – move the input focus to the last blade

This makes navigation around the Preview Portal easier.

Friday, February 27, 2015

Using SSH with Azure Linux Virtual Machines

Accessing Windows VMs in Azure is pretty straightforward – after you create the VM, you can download an RDP file from the Azure portal and remotely administer the VM. You can also access your VM via PowerShell, although that is a bit more complex due to the need for certificates. But what if you are using a Linux VM?

Turns out – it's pretty easy. The Azure documentation article, at http://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-linux-use-ssh-key/  shows you how to do it. The steps are pretty simple, but will vary with the Linux distro you are using.

The basic approach is to first generate the SSH keys. The easiest way to do this is to use openssl to generate an X.509 certificate and private key, then connect with PuTTY.
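The linked article walks through the details, but the key-generation step looks broadly like this (a sketch only – the file names are placeholders, and on Windows you would then convert the private key to .ppk with PuTTYgen before using PuTTY):

```shell
# Generate a 2048-bit RSA private key plus a self-signed X.509 certificate
# (file names here are just placeholders; -subj avoids interactive prompts)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj "/CN=example" -keyout myPrivateKey.key -out myCert.pem
```

You upload the .pem when creating the VM and keep the private key on your workstation.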

You can also forego creating keys and just use a password. You set the password when you create the VM, then just log in using PuTTY.


So for those of you who a) want to learn Linux and b) struggle with loading it on your own hardware – Azure provides a simple way to create and then use a Linux VM.

Updated Azure Module Is Released

I am just back from teaching Azure and Office 365 in Belgium, where we used the latest version of the Azure cmdlets. The new version, 0.8.14, contains a huge number of updates and improvements. The changes include:

  • New StorSimple commands in AzureServiceManagement mode.
  • Updated HD Insight cmdlets
  • New Azure Insights cmdlets in AzureResourceManager mode
  • A new cmdlet for Azure VM, Get-AzureVmDSCExtensionStatus to get the DSC status for a VM
  • A number of new Azure Automation cmdlets in AzureResourceManager mode

I like that Azure is becoming more DSC aware – I am really excited about seeing DSC being fully implemented in both Windows and Azure.

To get this new version, you can either use the Web Platform Installer (which allows you to install more than just the new module), or you can go to GitHub and get the stand-alone version from the Azure-PowerShell repository (http://az412849.vo.msecnd.net/downloads04/azure-powershell.0.8.14.msi). The latter is an MSI that installs the updated module. Note that if you are running PowerShell already, the MSI may ask you either to close those windows or to reboot to get the new module fully installed.

This update shows the sheer pace at which Azure is being updated. I find it staggering when you compare it to some earlier MS development cycles (e.g. the 5 years between NT4 and Windows 2000). The really good news is that Azure is getting richer and better by the month. The downside is the sheer difficulty IT Pros may have keeping up with this rapid pace of change. All in all, I think this is really not a bad problem to have!

Monday, February 23, 2015

Your Nearest Azure Data Centre

When designing a solution involving any cloud vendor, you need to be aware of the network latency between your users and the cloud. For Azure, this means putting your objects (VMs, web sites, storage, etc) in the data centre closest to you. But which one is that?

I just came upon a neat page that can help: http://linkis.com/azurewebsites.net/Jaybw. This page plots a nice looking graph of the latency between the client and the 15 existing Azure data centres around the world.  After a few tests, my graph looks like this:

image

There's also a nice table that goes along with the graph:

image

As you can see, the latency between my office and Azure favours the Western Europe data centre in Amsterdam. I had expected Dublin (North Europe) to be faster, or at least very close, but it was in fact slower. Not surprisingly, Brazil, Japan and Australia are a lot further away. I was also surprised that the times to South East Asia and East Asia were faster than both West US and East US.

What do your tests show?


Saturday, February 21, 2015

Azure Cmdlets and the Debug Switch

So here it is, late on a Saturday, and I'm off in the morning to teach an Office 365 class. In a fit of overexcitement, I decided to try to build out the lab environment on my laptop before travelling to the class. The class is run as a long-day boot camp and in the past the start of the course has been challenging, with issues relating to building out the environment. I hoped to avoid that this week!

This class is one of the first from Microsoft to utilise Azure. The student's "on premises" data centre is in Azure, and the course looks at adding Office 365 into the mix. It's a fantastic idea – the student just runs a few scripts and hey presto – their live environment (DC, Exchange, etc, etc, etc) is all built by script as part of the first lab. And amazingly – this just takes around two hours according to the lab manual. What could possibly go wrong?

Well – last time, those two hours turned into six, as we had some errors at the start with the first script not running properly. Of course, I was able to troubleshoot and fix the issues, although it did take time. So I thought it might be a clever idea to re-test the scripts this time and maybe avoid the issues.

I downloaded the first VM and installed it onto my Windows 10 client, then started running the build scripts. All was going well until I tried to run the script to create the DC. Each time I tried to create the VM, I would get the error: "CurrentStorageAccountName is not accessible. Ensure the current storage account is accessible and in the same location or affinity group as your cloud service." All the normal methods (i.e. looking at Google and reading the articles found) did not help – none of the suggested fixes applied to me. After two hours I was stuck.

Then, by this time on the third page of Google results, I came across a trick that should have occurred to me earlier: use the -Debug switch on the call to New-AzureVM (the cmdlet that was failing). With that, I was able to see the HTTP traffic of the underlying REST management API used by the Azure PowerShell cmdlets.

What I saw was the client asking for my storage account details – and the response claiming that the storage account did not exist. On closer inspection, I could see the storage account name being requested was almost, but not quite, correct. In creating the storage account, the script asked for user input (partner ID and student ID), then created a storage account with a unique per-student name – a nice touch. In my case, I had entered a partner number beginning with a 0 (zero). Looking closely at the script, part of it strips off the leading zero and uses the zero-less storage account name for later operations – and of course that fails. To fix things, I just removed everything created thus far from Azure and re-ran the scripts with better input, and away I went.

There are two lessons here. First (as I tell all my students in almost every PowerShell class): all user input is evil unless tested and proved otherwise. If you are writing a script that accepts user input, code defensively and assume the user is going to enter duff data. The script should have checked that the partner ID entered did not start with a zero – it's a production script, after all. Of course, I probably should have used (and eventually did use) a partner number not starting with a zero, so the underlying cause was user error. Still, a good lesson: given a chance, most users can do dumb things, and you need to write scripts accordingly.
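As a sketch of the kind of defensive check that script could have made (the function name and the exact validation rule are my own, for illustration):

```powershell
function Test-PartnerId {
  param ([string] $Id)
  # Treat an ID as valid only if it is all digits with no leading zero,
  # since a stripped zero would change the storage account name
  $Id -match '^[1-9]\d*$'
}

Test-PartnerId '01234'   # False - leading zero would break later operations
Test-PartnerId '1234'    # True
```

The build script could then loop on Read-Host until Test-PartnerId passes, rather than silently mangling the input.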

The second lesson is the value of the -Debug switch when using any Azure PowerShell cmdlet. There can be quite a lot of output, but the ability to see what the cmdlet is actually doing can be invaluable. In my case, it took only seconds to work out the problem once I'd seen the debug output from the call to New-AzureVM. I recommend you play with this as you get more familiar with Azure PowerShell – it can certainly help in troubleshooting other people's scripts!
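For example (the service name and VM configuration variable below are placeholders), you can add -Debug to a single call, or turn debug output on for the whole session:

```powershell
# Show the underlying REST traffic for one failing call
# (parameter values here are placeholders, not real resources)
New-AzureVM -ServiceName 'MyService' -VMs $VmConfig -Debug

# Or show debug output for every cmdlet in the session,
# without being prompted to continue at each debug message
$DebugPreference = 'Continue'
```

The first form prompts before each chunk of debug output; setting $DebugPreference is handier when you want to capture a whole transcript.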

Thursday, February 19, 2015

Azure Preview Portal - Improvements

A few days ago, Microsoft shipped a new version of the Azure Preview Portal – one of the two GUIs you can use to manage your Azure accounts, subscriptions and assets. The new preview portal has been in development for a while. I am hoping that sooner rather than later we'll have just one portal. But in the meantime, the improvements to the new portal are most welcome.

Leon Welicki, a PM on the Azure Websites team, has written a great article about the new portal describing the new features – and there are a lot. See his blog article at: http://azure.microsoft.com/blog/2015/01/29/announcing-azure-preview-portal-improvements/

There are a lot of new features!


PowerShell V5 – Feb 2015 Preview

Continuing with the approach of regular updates to PowerShell V5, the PowerShell team yesterday published a new version. You can read about the version in the PowerShell team Blog: http://blogs.msdn.com/b/powershell/archive/2015/02/18/windows-management-framework-5-0-preview-february-2015-is-now-available.aspx.

The download can be found at the Microsoft Download center: http://www.microsoft.com/en-us/download/details.aspx?id=45883. As noted in the team blog – this new version is only installable on Server 2012, Server 2012 R2 and Windows 8.1. I will also try it out on my latest build of Windows 10 and will report back once I get a chance to try it.

The download site has three .MSU files (one for each of the OSs mentioned) plus a set of release notes. Be careful, as the file names of the update files are similar! The download is not overly big (circa 10 MB for each MSU) and takes but a minute or so to install. But, since I had an earlier preview version loaded, the installation required a reboot.
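Once installed, a quick check confirms the new version is active:

```powershell
# After installing the February preview, this should report a 5.0.x version
$PSVersionTable.PSVersion
```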


Wednesday, February 18, 2015

Free ebooks from MS Press on Azure

Microsoft Virtual Academy and Microsoft Press have joined forces and have issued a number of free e-books on Azure – you can download them from the web (as PDF) and enjoy them on your PC/tablet/Phone/etc. You can get the full set of books from here: http://www.microsoftvirtualacademy.com/ebooks#azure.

I'm not sure how long these books will remain free – or how many of them are still up to date. Given the fast pace of Azure development, these books are almost out of date before you get them. But having said that, they are still worth reading.

At present I am looking at the book: Rethinking Enterprise Storage: A Hybrid Cloud Model. Although the book is now 18 months old, there is some good thinking here. It's certainly helped me to re-evaluate how storage works in a hybrid model and why that model is so useful for my customers.

So get reading!


Tuesday, February 17, 2015

Azure IP Ranges

If you are setting up firewall exclusions related to Azure resources, it helps to know the Azure Datacenter IP address ranges. Turns out that's pretty easy: just download the details from the Microsoft Download Centre, via the Azure site. The actual deep link to the XML document containing the IP ranges is: http://go.microsoft.com/fwlink/?LinkId=390343. Speaking personally, I found that deep link a bit hard to see on the Datacenter IP Ranges page.

The list you can download from Microsoft contains all the compute IP address ranges (including SQL ranges) used by the Azure datacenters around the world. Each section of the XML document specifies a geographic region and the IP address ranges associated with that region.

The download is an XML document containing the current IP address ranges for all Azure data centres around the world, except China. The document looks like this (when viewed from PowerShell ISE):

image

The Windows Azure Datacenter IP Ranges in China are separately defined. The download centre enables you to download a separate list as the Chinese data centres are operated by 21Vianet.  You can get this document from here: https://www.microsoft.com/en-us/download/details.aspx?id=42064. It looks like this:

image

These IP address lists are published weekly. Microsoft also makes a good security point: do not assume that all traffic originating from these IP address ranges is trustworthy!
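Once downloaded, the XML is easy to slice up with PowerShell. Here's a sketch using a small inline fragment shaped like the real file – the element and attribute names are my assumptions based on the structure described above, so check them against the actual download:

```powershell
# A tiny fragment shaped like the downloaded document (element names assumed)
[xml] $AzureIPs = @'
<AzurePublicIpAddresses>
  <Region Name="europewest">
    <IpRange Subnet="65.52.128.0/19" />
    <IpRange Subnet="94.245.97.0/24" />
  </Region>
</AzurePublicIpAddresses>
'@

# List each region and how many ranges it has
foreach ($Region in $AzureIPs.AzurePublicIpAddresses.Region) {
  '{0}: {1} range(s)' -f $Region.Name, @($Region.IpRange).Count
}
```

For the real file, replace the here-string with `[xml] $AzureIPs = Get-Content .\AzurePublicIPs.xml` and the same loop applies.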

Monday, February 16, 2015

Studying for Azure Exam 70-533?

If so, here's a study guide: http://vnext.azurewebsites.net/?p=10381. It works through each area covered in the exam and sets out the specifics of what is tested. For each objective, there are links to more information about that area.

And, as the article points out, don't forget that if you take this exam before May 31st 2015, you get a free re-take of the exam should you not pass it first time.


Friday, February 13, 2015

Azure Backup Improved

In keeping with the near constant stream of improvements to all aspects of Azure, Microsoft has just announced some updates to Azure Backup.

Previously, there were several limits, including:

  • Azure Backup used only a single retention policy for backed-up data – limiting.
  • The number of backup copies was limited to 120, so daily backups covered only around the last four months.
  • Azure Backup offered no alternative to sending the initial backup data to Azure over the network – another limit.

But today, MS announced some major changes to Azure Backup.

  • You can now set multiple retention policies on backup data. This means backup data can be stored for multiple years, by keeping more backup copies near-term and fewer as the backup data ages.
  • The number of backup copies that can be stored in Azure is increased to 366 – a three-fold increase.
  • Azure Backup now integrates with the Azure Import service to send the initial backup data to an Azure data center. This enables customers to ship the initial backup data on disk to the nearest Azure data center – a significant benefit if you want to back up a LARGE amount of fairly static data.

The latest update also fixes some older issues, like not being able to back up more than 850 GB, etc.
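To put the new 366-copy limit in context, a quick bit of arithmetic shows how tiered retention stretches coverage – the tier sizes below are purely illustrative, not Azure defaults:

```powershell
# Illustrative tiered retention: more copies near-term, fewer as data ages
$Daily   = 30   # one copy a day for a month
$Weekly  = 52   # one a week for a year
$Monthly = 36   # one a month for three years
$Yearly  = 10   # one a year for a decade
$Total   = $Daily + $Weekly + $Monthly + $Yearly
"$Total copies used, out of the 366 allowed"   # 128 copies
```

So a decade of coverage fits comfortably inside the limit – something a single daily-only policy could never do.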


Tuesday, February 10, 2015

Azure Premium Storage

Before Christmas, Microsoft added a bunch of new features to Azure. One that I've only just noticed is Premium Storage. Azure provides several types of cloud storage (blobs, queues, tables and files) – with file storage still in preview. Premium Storage, too, is in preview.

The basic idea of Azure Storage is that you store your data in the cloud: whether that data is the VHD for an Azure VM, a message queue used to hook up different parts of an application, or whatever. Azure Storage is a fundamental building block you use to create cloud computing systems based on Azure.

The new Premium Storage feature allows you to store this data on SSD disks. This new storage option provides higher performance and lower disk latency. Not only that but, at least during the preview, Microsoft offers three sizes of SSD disk: P10, P20 and P30. These disks are 128 GB, 512 GB and 1 TB respectively. The bigger disks provide more IOPS and greater throughput.

I look forward to playing a bit more with Premium Storage! I suppose it goes without saying: you can easily use Azure PowerShell cmdlets to manage this storage. I hope to generate a few scripts to demonstrate this!

For more details on Azure Storage, see Sirius Kuttiyan's recent blog post. For fuller details on Azure Storage pricing, see: http://azure.microsoft.com/en-gb/pricing/details/storage/. Premium Storage is still in preview, and is offered at a bit of a discount: the 128 GB P10 disk, for example, is £5.47/month, while the 1 TB P30 is £37.54/month.
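A quick check of those preview prices shows the bigger disk is also cheaper per gigabyte:

```powershell
# Per-GB monthly cost from the preview prices quoted above
$P10PerGB = 5.47 / 128     # about 4.3 pence per GB per month
$P30PerGB = 37.54 / 1024   # about 3.7 pence per GB per month
'{0:n4} vs {1:n4} GBP per GB/month' -f $P10PerGB, $P30PerGB
```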

Monday, February 09, 2015

Another Way to Access Azure VMs with PowerShell

In a blog post over the weekend, I demonstrated how you can access an Azure VM from an on-premises workstation using PowerShell. To summarise that approach: you first need to trust the management certificate provided to the VM by Azure, then use Enter-PSSession with the DNS name of the service and the port opened for the PowerShell management endpoint, explicitly requesting SSL. While it took me a bit of time to work all this out (getting the cert downloaded was my biggest initial stumbling block), the approach is fairly simple.

But, like almost everything in PowerShell, it seems, there is yet another way to do this. No sooner had I posted that blog article when I got a tweet:

image

And the answer was, at that point, no. I'd not noticed that cmdlet! I don't know all of the circa 500 cmdlets (yet)! But it was a nice prod to take a look. Johan's suggestion was a good one, as the coding is simpler. With both methods, you need to create a credential object to pass to the remote machine (specifying the user ID you created when you first created the Azure VM, or some other administrator you have created on the remote Azure system). And you need to trust the certificate the Azure machine presents when negotiating SSL between your client system and the Azure server. Once you have those two done, you can enter the PSSession like this:

$PshEP = Get-AzureVM cookhamlo | Get-AzureEndpoint |
           Where-Object Name -eq 'PowerShell'
Enter-PSSession -ComputerName $VmName -Port $PshEP.Port -Credential $VmCred -UseSSL

Using Johan's suggestion, the coding would look like this:

$Azuri = Get-AzureWinRMUri -ServiceName $VmName
Enter-PSSession -ConnectionUri $Azuri.AbsoluteUri -Credential $VmCred

Having tried them both, each approach works perfectly well (assuming you have a valid credential object and trust the remote system's management port certificate) – but the second approach feels easier.
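If you do this regularly, the second approach wraps neatly into a little helper – the function name here is my own, and it assumes you have already trusted the VM's certificate:

```powershell
function Connect-AzureVmSession {
  param (
    [string]       $ServiceName,
    [pscredential] $Credential
  )
  # Get the WinRM connection URI for the cloud service, then enter the session
  $Uri = Get-AzureWinRMUri -ServiceName $ServiceName
  Enter-PSSession -ConnectionUri $Uri.AbsoluteUri -Credential $Credential
}

# Usage (service name is a placeholder):
Connect-AzureVmSession -ServiceName 'cookhamlo' -Credential $VmCred
```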

Saturday, February 07, 2015

Accessing Azure VMs with PowerShell

I've been doing quite a lot of work recently with Azure VMs and PowerShell. I've written some scripts to create a VM and to automate some common management functions on those Azure VMs (like re-size the VM, etc). I'm slowly posting the actual scripts to my PowerShell scripts blog (http://pshscripts.blogspot.co.uk). In the work I'm doing and in conversations with customers, many of the Azure VMs being created are stand alone. I see this as a great on-ramp to Azure, allowing the customer to dip their toe into the Azure water, for pretty minor costs. Once they are happy, they can move on to linking their on-premises infrastructure to the Azure Cloud. But in the meantime, those Azure VMs need to be managed – and of course, that means using PowerShell!

One of the first scripts I wrote was to create a simple, stand-alone VM in Azure. It took me quite a while, mainly because Azure is different to on-premises Hyper-V. I had to go through a small Azure learning curve. The first thing I want to do after creating an Azure VM is to manage it with PowerShell using remoting. You can do that, but there are a few barriers in the way (that would largely be absent in an on-premises Kerberos based authentication model!).

When you create an Azure VM, Azure creates a PowerShell endpoint to enable PowerShell management. Since the Azure VM is not in your on-premises AD, you can use NTLM authentication (and authenticate against the in-VM user database). You need to provide a PowerShell credential object when logging into the remote session. But you know how to do that!!

The second issue is that, since the VM is in a different security realm, you need mutual authentication when creating the PowerShell session. For non-domain-joined systems, this means using SSL. Yes, there are ways around this, or to simplify things, but there are security risks in doing so – so I'll stick with SSL. To use SSL, you need to trust the SSL certificate offered up by, in this case, the Azure VM.

When you create an Azure VM, you can either provide a certificate (e.g. issued by a CA you trust), or Azure can create a self-signed certificate for the VM. To use the self-signed cert, you need to trust it. To do this, you just import the VM's default cert into your local host's trusted root store. Once in place, your system will trust the VM and complete authentication successfully.

I've automated the task of importing the cert with the Install-WinRmAzureVmCert function, which I posted tonight on my scripts blog. This script defines a function that takes an Azure VM name and Azure service name, and installs the VM's default self-signed cert into the local host's trusted root store:

$Vmname = 'CookhamLO.CloudApp.Net'
# Get the relevant Azure Subscription and set it
$SubscriptionName = (Get-AzureSubscription)[1].subscriptionname
Select-AzureSubscription $SubscriptionName
# And now install the cert
Install-WinRMAzureVMCert -CloudServiceName Cookhamlo  -VMName Cookhamlo
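For reference, the heart of such a function might look something like this – a sketch only, where the cmdlet piping, the .Data property and the certificate handling are my assumptions, so see the posted script for the real implementation:

```powershell
# Sketch: fetch the VM's default certificate and trust it locally
# (property names and pipeline behaviour are assumptions)
$AzureCert = Get-AzureVM -ServiceName $CloudServiceName -Name $VMName |
               Get-AzureCertificate

# Write the base-64 certificate data out as a .cer file
$CertFile = Join-Path $env:TEMP "$VMName.cer"
"-----BEGIN CERTIFICATE-----`n$($AzureCert.Data)`n-----END CERTIFICATE-----" |
  Set-Content -Path $CertFile

# Import into the local machine's trusted root store (needs an elevated session)
Import-Certificate -FilePath $CertFile -CertStoreLocation Cert:\LocalMachine\Root
```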

And once that is done, I just get the credential (I'll leave that up to the reader!), get the remote management end point to get the TCP port number to use for PowerShell remoting then call Enter-PsSession. Like this:

image

As you can see at the bottom of the screenshot, I just use Enter-PSSession and am able to access the remote VM.

This is all pretty easy, albeit complicated by the security architecture of Windows and Azure. It's nice to know one can create a very secure remoting tunnel to your Azure VMs and manage them simply – just using PowerShell.


Wednesday, February 04, 2015

Creating Help Files for PowerShell – Sapien PowerShell Help Writer

In PowerShell, you can get cmdlet/script/function help in a couple of ways. First, you can use comment-based help – just put in a few carefully scripted comments and the Get-Help engine can provide help. This is fine for advanced functions, but if you want anything richer, including real cmdlet help text, you need to use MAML (Microsoft Assistance Markup Language). MAML is a variant of XML and, in my experience, is almost impossible for normal people to write using just Notepad (or another text editor).

There have been a few GUI-based tools over the years that have purported to do this – but in my experience none of them have ever worked well. I suspect Microsoft has some internal tools, but these have not been released outside Microsoft. Well – until now, that is!!

In her blog article entitled Introducing PowerShell Help Writer 2015, June Blender announces a new tool, PowerShell Help Writer (PHW), developed by SAPIEN. This tool is a fully featured help editor that makes it easy to write and manage complete – and more complex – help topics.

June's blog post has some more detail on the product, and you can find out even more by going over to SAPIEN's web site for PHW: http://www.sapien.com/software/powershell_helpwriter. Sadly, the tool is not free (it costs US$49.00). That's disappointing at one level – at that price, casual scripters, small businesses, etc are unlikely to pay for the product. I continue to believe this tool, or something like it, should be produced by Microsoft as part of PowerShell or the PowerShell ISE.

Still, if you are writing enterprise cmdlets, or commercial products, then this tool is almost a given. Unless you are one of the very, very few who can write MAML!


Tuesday, February 03, 2015

Microsoft Cloud Platform Roadmap

In a nice change from the earlier Cone of Silence approach, Microsoft has begun to publish a detailed road map for its cloud platform. Published at http://www.microsoft.com/en-us/server-cloud/roadmap/recently-available.aspx?TabIndex=2, the road map shows features that have recently been released, are in public preview, are in development, or have been cancelled.

This means we can now see what MS are planning, and have begun to roll out in preview. Nice touch!


Monday, February 02, 2015

More Azure VM sizes Available

At the beginning of January, Microsoft announced the general availability of bigger Azure VM sizes. The G-Series provides more RAM and more logical processors, combined with lots of SSD disk storage – these VM sizes should provide extraordinary performance. The G-Series currently scales from Standard_G1 (2 CPUs, 28 GB RAM, 412 GB SSD) through to Standard_G5 (32 CPUs, 448 GB RAM, 6.5 TB SSD). http://azure.microsoft.com/blog/2015/01/08/azure-is-now-bigger-faster-more-open-and-more-secure/ provides more capacity details for these new VMs.

These VM sizes are ideal for large database servers – not only SQL Server, but also MySQL, MongoDB, Cassandra, Cloudera, DataStax and others. Each VM can also have up to 64 TB of attached data disks! To match the compute grunt of these VMs, they feature the latest Intel CPUs (the Intel Xeon processor E5 v3 family) and DDR4 memory.

When Microsoft first announced these new VM sizes, availability was restricted to the West US Azure region (but they were clear that they were working to add support in additional regions). To find out which regions offer a particular VM size, I wrote a small script function. Here's the function:

Function wherecaniget {
  [CmdletBinding()]
  param ($VMSize)
  # Get locations
  $Locations = Get-AzureLocation
  # Where is that VM size?
  Foreach ($Loc in $Locations) {
    $Ln = $Loc.DisplayName
    $Rs = $Loc.VirtualMachineRoleSizes
    If ($Rs -contains $VMSize) { $Ln }
  }
}
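Calling it is then a one-liner – the output depends on your subscription and will change as Microsoft rolls the sizes out to more regions:

```powershell
# Which regions currently offer the smallest G-Series size?
wherecaniget -VMSize 'Standard_G1'
```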

I notice that these new VM sizes are now available in the East US region as well.

image 

The rate of change is just awesome!


Sunday, February 01, 2015

Adding an Endpoint to an Azure VM

In a recent blog article, I showed how you can create an Azure VM. If you are familiar with Hyper-V and the Hyper-V cmdlets, the approach is a little different (since you configure Azure differently). One aspect of creating a VM is how you, in effect, open ports to an Azure VM.

With Azure, you create an endpoint which, in effect, maps a public port on the Azure cloud service to an internal port on your VM. The approach is pretty straightforward, although it uses a pipelined pattern:

Get-AzureVM -ServiceName $ServiceName -Name $VmName |
  Add-AzureEndpoint -Name 'Http' -Protocol 'tcp' -PublicPort 80 -LocalPort 80 |
    Update-AzureVM

This pattern has you first get the Azure VM object, to which you add an Azure endpoint; the updated object is then piped to Update-AzureVM, which applies the new endpoint to the VM. Of course, if the endpoint already exists, this pattern throws an exception. With Add-AzureEndpoint, you specify the protocol and the public and local ports. This 'one-liner' creates a public port and an internal (local) port to enable the VM to serve HTTP traffic.

I've created a simple script, over on http://pshscripts.blogspot.com, that implements a New-HttpVmEndpoint function which you can use to add a new HTTP endpoint to a virtual machine. That script omits some error handling you might wish to add in – for example, checking whether the endpoint already exists, or whether the VM and VM service exist. You could obviously extend the script to add other endpoints (e.g. HTTPS).
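As a sketch of the sort of defensive check mentioned above (this assumes the VM and service exist; variable values are placeholders):

```powershell
# Only add the endpoint if one with that name is not already present
$VM = Get-AzureVM -ServiceName $ServiceName -Name $VmName
if (-not ($VM | Get-AzureEndpoint | Where-Object Name -eq 'Http')) {
  $VM | Add-AzureEndpoint -Name 'Http' -Protocol tcp -PublicPort 80 -LocalPort 80 |
    Update-AzureVM
}
```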

Saturday, January 31, 2015

Office 365 Plans

Licensing with Office 365 is a bit different from normal on-premises software. With Office 365, rather than purchase a SKU, you subscribe to a plan – a set of services the user of that plan receives. In the early days of Office 365, the plans, and the ability to move between them, were problematic: if you took out a small-business type plan and then wanted to move that subscription up to an Enterprise plan, you couldn't. It was all a bit messy. Then MS did two things: they simplified the plan structure, and enabled you to mix and match the plans.

The new plan structure for Office 365 contains 6 separate plans:

  • Office 365 Business Essentials
  • Office 365 Business
  • Office 365 Business Premium
  • Office 365 Enterprise E1
  • Office 365 Pro Plus
  • Office 365 Enterprise E3

The Office 365 Business and Pro Plus plans are just the fully installed, on-prem set of Office software products (Word, Excel, PowerPoint, Outlook, Publisher and OneNote); the Pro Plus plan adds Access. These plans are a way of subscribing to Office, versus outright purchase – they include just the software, with no online services or server software. They appeal to organisations looking to spread the cost over time. One nice aspect of these plans is that you can load Office on up to 5 systems (e.g. laptop, desktop, home, etc).

The Business Essentials and Enterprise E1 plans are, in effect, online Office (the Office Online apps, plus an Exchange mailbox, file and storage space, plus both SharePoint and Lync). The E1 plan has a few added features appropriate to larger firms, such as compliance, BI and enterprise management of applications.

Finally, the Business Premium and E3 plans are the combination of the first two: full Office plus mail, SharePoint and Lync. As above, the E3 plan adds a bit more.

For fuller details of precisely what each plan offers, see https://products.office.com/en-gb/business/compare-more-office-365-for-business-plans.

What this now means is that any organisation can mix and match any of the plans. The one restriction is that a given subscription is limited to 300 seats or fewer of the business plans. So you could give the Business/Pro Plus subscription to the road warriors who need offline access, while giving Business Essentials to in-house staff who can use Office in the cloud. This offers a lot of flexibility.

del.icio.us Tags: ,

Friday, January 30, 2015

PowerShell Patterns and Azure Cmdlets

As anyone learning PowerShell quickly finds out, there are a number of basic patterns of usage that work well across a huge range of PowerShell cmdlets/modules. One of the most basic pairs of patterns I try to teach, by way of example here, is early vs late filtering. The following two commands accomplish the same result:

Get-Process Power*
Get-Process | Where-Object Name -like 'Power*'

I wrote a little script to measure the performance difference between these two (http://pastebin.com/4N2YYqnZ). Running each of these 1000 times, the result was that early filtering was 3 times as fast as late filtering. So as a general rule – always filter early. BUT: sadly, some cmdlets that emit useful objects do not support early filtering, so it's worth knowing the late filtering pattern, which always works. I try to teach both of these patterns, and why they differ, since so many scripts can make use of them.
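You can see the gap for yourself with something along these lines (a sketch of the kind of measurement my pastebin script does; absolute timings will vary by machine):

```powershell
# Time early filtering (wildcard resolved inside Get-Process) against
# late filtering (every process object piped through Where-Object)
$early = (Measure-Command {
    1..100 | ForEach-Object { Get-Process -Name power* }
}).TotalMilliseconds

$late = (Measure-Command {
    1..100 | ForEach-Object { Get-Process | Where-Object Name -like 'power*' }
}).TotalMilliseconds

'Early filtering: {0,8:N1} ms' -f $early
'Late filtering:  {0,8:N1} ms' -f $late
```

The wildcard form is safe here because Get-Process does not error when a wildcard matches nothing.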

Which brings me to Azure. In much of my PowerShell use, there is a cmdlet to set some value for some object or perform some operation on an object. For example, to change the number of CPUs in a Hyper-V VM, you use the Set-VM cmdlet, like this:

image
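In Hyper-V, that's a one-liner. A minimal sketch (assuming the Hyper-V module is installed, you are in an elevated session, and 'psh1' is a hypothetical, stopped VM):

```powershell
# Change the virtual CPU count on a stopped Hyper-V VM
# ('psh1' is a hypothetical VM name)
Set-VM -Name psh1 -ProcessorCount 2
```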

To do this in Azure is a bit more complicated. To change the number of processors in an Azure VM, you need to change a different property on the Azure VM object. Plus, the way you do it is different. With Azure, each VM runs at a given instance size. The instance size determines the number of CPUs and the memory each Azure VM gets. So I would expect to do something like this:

Set-AzureVM -Name xxx -Service xxx -InstanceSize 'extrasmall'

Unfortunately, that cmdlet does not exist, and it wouldn't be the right way to do it anyway. The good news is that in Azure, there is a specific cmdlet to change the instance size (Set-AzureVMSize). So for Azure VMs, the most common pattern looks like this:

Get-AzureVM -ServiceName psh1 -Name psh1 -Verbose |
  Set-AzureVMSize -InstanceSize $VmSize -Verbose |
    Update-AzureVM -Verbose

A different pattern. Unlike Set-VM, Set-AzureVMSize takes an Azure VM object (not the name of the VM). Secondly, Set-AzureVMSize does not persist the updated instance size value – it just updates an in-memory object representing the VM. To persist the value, you have to write it back to Azure using Update-AzureVM. While you can do this a step at a time and avoid using the pipeline, in this case using the pipeline seems easier.

In the Azure VM cmdlets, we see a great deal of use of pipeline-based patterns. For example, to create an Azure VM, you first create a VM config object (technically, New-AzureVMConfig produces an object of type Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM), which you then pipe to New-AzureVM (see http://tfl09.blogspot.co.uk/2015/01/managing-azure-vms-with-powershell.html). This general pattern – getting or creating an in-memory object, updating that object (typically via the pipeline), and finally persisting it to Azure – is easy to apply.

When you start to use Azure, you quickly find that while many of the features of Azure are similar to those in Windows – in this case, Azure VMs – the methods you use to manage those features do differ. You have different patterns of usage you need to learn and understand. When I explained this to a class this week, I got the obvious question: why? Why is it different?

Since I was not on the dev team, I can't really answer that. But I suspect the main technical reason is that the Azure VM cmdlets are, in effect, a wrapper around the REST API exposed by Azure. I am guessing it was easier to write the cmdlets using this pipeline pattern (and the approach of passing a VM object vs a VM name). Having said that, incorporating the Get-AzureVM and Update-AzureVM calls inside the Set-AzureVMSize cmdlet would not have been that difficult. But that wasn't what MS implemented.

So go with the flow; learn the patterns and enjoy Azure!

del.icio.us Tags: ,

Updated Azure PowerShell Module

I've been playing a lot with the Azure cmdlets and noticed that there's an updated version of the module. My recent Azure-related posts have been based on the version of the tools I loaded before the New Year:

image

If you go to the download page, http://go.microsoft.com/fwlink/p/?linkid=320376&clcid=0x409, and run the platform installer, you see an updated version of the tools is available:

image

The download is around 13 MB and took around 45 seconds to download and install. After installation:

image

Sadly, there appear to be no release notes or readme.txt file in the installation folder. Given the huge amount of change, better release notes would be useful. And it would be nice to have the latest module available via PowerShellGet. We can hope!

del.icio.us Tags: ,,

Thursday, January 29, 2015

Using Azure – Get a Free Month's Trial

I've been doing a lot of work around Azure of late and have been running Azure training around Western Europe. Most of the folks I see in classes are new to Azure – it's an unknown that is challenging their existing approach to computing (i.e. everything on premises, each app running on its own physical hardware). In our classes, we give the students access to Azure and get them to use Azure Backup, let them create web sites, VMs, virtual networks, etc. In one class, we have the students build out an 'on premises' environment (in Azure!), then use that environment to integrate with Office 365.

What I am seeing is that Azure is fairly easy, but very different in places. Certainly, the exposure to Azure in class gets the attendees over the initial learning curve and allows them to play. The key enabler for all this playing is the free Azure Pass we give each student. But what if you were not able to attend the training but still want to play with Azure?

The answer is easy: if you live in the UK, go here: http://azure.microsoft.com/en-us/pricing/free-trial/?WT.mc_id=A2FC1A0FA and sign up for a free trial. From the UK, at least, you sign up for free and get £125 worth of Azure credit. Signup does require a credit card, but Microsoft states: "We use the phone number and credit card for identity verification. We want to make sure that real people are using Azure. We do not bill anything on the credit card".

Should you exceed the credit amount, your new Azure account will be suspended. At any time, you can optionally upgrade the trial to a Pay-As-You-Go Azure subscription. BUT you will not be billed anything if you simply use your credits and let the subscription expire.

This trial is not just available here in the UK: it's available in 140 countries around the world. See the FAQ section of the free trial page to see the countries where it's available.

So if you have not used Azure and want to experiment, go and sign up. And join in the fun that is Azure. You can even use the scripts I've posted here to play with PowerShell and Azure! So what are you waiting for??

del.icio.us Tags: ,

Tuesday, January 27, 2015

Azure Networking Fundamentals- MVA Course

I just noticed that the Microsoft Virtual Academy has a new course: Azure Networking Fundamentals for IT Pros. It's narrated by Aaron Farnell and consists of 4 modules, taking around 120 minutes (including assessment time). The MVA rates the technical level as 300 – fairly deep, but not overly so!

This MVA course consists of 4 modules:

  • Intro to Azure Network Basics and VPN requirements
  • Plan and Design your Cloud network infrastructure
  • Configuring Azure and On-Premises
  • Testing Connectivity and Monitoring

This is a good start to understanding Azure networking! And like all MVA courses, you can download the MP4s of each module and take them on the road with you.

del.icio.us Tags: ,,,

Monday, January 26, 2015

Azure and Compliance

Over the past 6 months, I've been conducting quite a lot of cloud technology training, particularly Azure and Office 365. I've been speaking to a number of European MSPs in the SMB space who are now looking to take on Azure as a platform for their customers, to some degree replacing their old historical favourite, Small Business Server. SBS (rip) was a great platform for the small business – cheap, comprehensive and relatively easy to manage. But it's gone and not coming back.

When extolling the virtues of the cloud, I hear a number of objections – some valid, some possibly less so. One objection I hear to Azure revolves around compliance. For customers in compliance-affected businesses, compliance is not an option.

It's clear that Microsoft recognise the need to have Azure seen as a product that can comply with most, if not all, of the world's compliance regimes. It was comforting, therefore, to read Lori Woehler's recent blog article about Azure and compliance.

In her article, she notes that Azure has recently, and successfully, completed an audit against ISO/IEC 27001:2013, carried out by the independent British Standards Institution (BSI) Americas. BSI also validated that Azure complies with the ISO/IEC 27018 code of practice for protection of Personally Identifiable Information (PII) in public clouds.

Woehler goes on to note that Azure has expanded the services in scope for SOC 1 and SOC 2; that the US Department of Health and Human Services has granted FedRAMP Authority to Operate to both Office 365 and Azure AD; and that Azure Government is one of the first cloud platforms to meet US Criminal Justice Information Services (CJIS) certification requirements for state and local governments. She mentions other, non-US compliance initiatives for Azure, including Singapore Multi-Tier Cloud Security (MTCS's first Level 1 end-to-end cloud service offering) and Australian Government Information Security Registered Assessors Program (IRAP) accreditation.

These, and the other Azure initiatives mentioned, should help to bridge the confidence gap (as well as enabling Azure to be used in many compliance-bound industries). And this work just keeps going on, both to comply with new and additional compliance schemes and to re-certify on a regular basis. Azure is changing on what appears to be a weekly basis – the compliance certifications need to keep pace.

Hopefully, this continuing effort will go a long way towards assuaging at least some of the concerns of the SMB market space.

del.icio.us Tags: ,

Friday, January 23, 2015

Creating Azure VMs with PowerShell

I've been playing for the past few weeks with Azure VMs. I can, obviously, do most things from the GUI, but somehow, as a PowerShell guy, that just seems wrong! So I've been honing my PowerShell skills. The Azure module contains over 500 commands. In the version on my home workstation, there are 509 commands, of which 35 are aliases.

As an aside, those aliases are used to alias the old 'Windows Azure Pack' cmdlets, which have in effect been replaced with newer cmdlets. For example, the old Get-WAPackSubscription is now an alias for Get-AzureSubscription. This is a great idea for helping to ensure forwards compatibility, and I suspect we'll see more of it. For those of you with older Windows Azure Pack based scripts – consider upgrading them to use the new Azure module's native cmdlets.
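You can explore this for yourself once the Azure module is loaded. A sketch (the counts and alias names will vary with the module version installed):

```powershell
# Count the commands in the Azure module, and see how many are aliases
$cmds = Get-Command -Module Azure
$cmds.Count
($cmds | Where-Object CommandType -eq 'Alias').Count

# See where the old WAPack names now point
Get-Alias -Name Get-WAPack* | Select-Object Name, Definition
```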

The first thing you have to know about Azure VMs is that, while each is in the end 'just' a Hyper-V VM, they are a bit more complex than on-premises VMs. There are several Azure features to consider, including storage accounts, images, endpoints, locations and Azure services.

In Azure, each VM runs within an Azure service. I like to think of the Azure service as the load balancer. In the simplest case of a single VM, you have the VM and its service, and you use the name and IP address of the service to access the VM. The service also allows you to provide more instances of the VM in a load-balanced fashion. But today, I just want to create a simple VM – one that runs inside a new Azure service with the same name as the VM.

To create an Azure VM, you have two main options: use an Azure VM image as the starting point for your VM, or create the VM on premises, ship its VHD up to Azure, and then create a VM using that uploaded VHD. In this blog post, I'll concentrate on the first, leaving the second for another day.

A VM image is a sysprepped operating system image, often with additions and customisations. Each image is built from some base OS, with changes possibly made. You can get simple base OS images – in effect, what would be on the product DVD. Others have been customised, some heavily. Azure images come from a host of places, including Microsoft and its partners. Once the reference image is created, the creator syspreps it and publishes it into the image gallery.

You can easily create a VM from an existing VM image. To see the VM images in Azure, you simply use Get-AzureVMImage. As of writing, there are 438 images; of these, 187 are Linux based and 251 Windows based. A given image is available in one or (typically) all Azure locations. An image belongs to an image family and has a unique image name. With 125 image families to choose from, finding your image (and its specific image name) comes down to calling Get-AzureVMImage and piping the output to your normal PowerShell toolset!

One suggestion if you are experimenting: a Get-AzureVMImage call takes a while, as you are going out to the Internet (in my case, over a slow ADSL line). Save the images to a variable, then pipe the variable to Where-Object/Group-Object/Sort-Object/Select-Object, thus avoiding repeated round trips to the Azure data centre.

Today, I just want to create a VM from a Windows Server 2012 R2 Datacenter image. So to find the image, I do this:

image 
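In outline, the filtering behind that screenshot looks something like this – a sketch, assuming the image family name as it appeared in the gallery at the time (family names and counts change over time):

```powershell
# Cache the image list, then slice and dice it locally
$imgs = Get-AzureVMImage

# Break the images down by OS, and count the image families
$imgs | Group-Object -Property OS | Select-Object Name, Count
($imgs | Group-Object -Property ImageFamily).Count

# Find the Server 2012 R2 Datacenter images, newest last
$imgs | Where-Object ImageFamily -eq 'Windows Server 2012 R2 Datacenter' |
    Sort-Object -Property PublishedDate |
    Select-Object Label, ImageName, PublishedDate
```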

Next, in creating the Azure VM, you need an Azure storage account in which to store the VHD for your VM. The VHD starts off as, in effect, a copy of the image (i.e. a sysprepped version of Windows) held in Azure Storage. You can pre-create the storage account, although in this case, I let the cmdlets build the storage account for me.

So here's the script:

# Create-Azurevm.ps1
# This script creates an azure VM
# Set Values
# VM Label we are looking for
$label          = 'Windows Server 2012 R2 Datacenter, December 2014'
#vm and vmservice names
$vmname         = 'psh1'
$vmservicename  = 'psh1'
# vm admin user and username
$vmusername     = 'tfl'
$vmpassword     = '~+aQ8$3£-4'
# instance size and location
$vminstancesize = 'Basic_A3'
$vmlocation     = 'West Europe'

# Next, create a credential for the VM
$Username = "$vmname\$vmusername"
$Password = ConvertTo-SecureString $vmpassword -AsPlainText -Force
$VMcred   = New-Object System.Management.Automation.PSCredential $Username, $Password

# Get all Azure VM images
$imgs = Get-AzureVMImage

# Then get the required image's unique image name
$img   = $imgs | Where-Object Label -eq $label
$imgnm = $img.ImageName

# OK - do it and create the VM

New-AzureVMConfig -Name $vmname -InstanceSize $vminstancesize -ImageName $imgnm |
  Add-AzureProvisioningConfig -Windows -AdminUsername $vmusername -Password $vmpassword |
    New-AzureVM -ServiceName $vmservicename -Location $vmlocation

Once the VM is created, you can then start and use it. Having said that, there are some pre-requisites, like setting up end points and enabling remote management inside the VM. I'll cover these topics in later blog posts.

del.icio.us Tags: ,

MVA Training For Azure

I see that Microsoft are continuing to post more great Azure training to the Microsoft Virtual Academy. If you look here, you will find some great videos that can ultimately prepare you for exam 70-533 (Implementing Microsoft Azure Infrastructure Solutions).

There are several great things about this training. First, it's pretty current – Azure changes literally weekly, which means some of the printed material from other training outlets can be woefully out of date! Second, it's free. And finally, being simple videos, it's easy to watch in your spare time (Ed: what is that?).

del.icio.us Tags: ,,,

Monday, January 19, 2015

Azure VMs – Improving the Display XML

The cmdlets that come with Azure (all 508 of them) also come with a little display XML. But as I play with the cmdlets, I find the display XML either non-existent or less than helpful. To that end, I've started developing some supplemental .format.ps1xml files.

Today, I was playing with the output of Get-AzureVM. By default, the output looks like this:

image

I have taken the liberty of creating some new display XML, and with that loaded, Get-AzureVM now produces better looking (IMHO) output.

image

As you can see, there is some more information (including IP address and FQDN) that is useful when you are troubleshooting.

To get the display XML, take a look at http://pshscripts.blogspot.com/2015/01/azurevmformatps1xml.html – there you can grab the XML, save it to a local file, then use Update-FormatData to load it.
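Loading supplemental display XML into a session is a one-liner. A sketch (the path here is hypothetical – point it at wherever you saved the XML; -PrependPath makes your views take precedence over the built-in ones):

```powershell
# Load supplemental format data for the current session
# (the path is hypothetical)
Update-FormatData -PrependPath C:\Scripts\AzureVM.format.ps1xml
```

Note that this only affects the current session – re-run it in each new session, or add it to your profile.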

It's kind of cool to be able to improve released cmdlet sets! I wonder if anyone from the Azure team reads this. If so, you guys are welcome to the XML!

Friday, January 16, 2015

Fun with the PowerShell Pipeline

I've been having some fun with the PowerShell Pipeline. This started with a post to a PowerShell question over on Spiceworks (http://community.spiceworks.com/topic/737915-powershell-question-help). The OP wanted to stop the spooler, remove any temp files then restart. Simple enough – but the first response suggested using pipes like this:

Stop-Service -Name Spooler |
Remove-Item C:\Windows\System32\spool\PRINTERS\* |
Start-Service -Name Spooler

I looked at this for a long while, wondering what the heck the pipeline would be doing here. More importantly, I could not believe it would work – surely the output of Stop-Service would be null, thus Remove-Item would fail, etc. Then I tried it and, much to my surprise, it actually works!

Then I began to wonder two things – WHY does it work at all – and what is the performance impact. To understand why requires a good understanding of how the pipeline works. In this case, Jason Shirk has a great explanation of the underpinnings over on Stack Overflow: http://stackoverflow.com/questions/22343187/why-is-an-empty-powershell-pipeline-not-the-same-as-null. I highly recommend reading it to help you better understand what the pipeline is doing.

Ok – so I know WHY it works, but what is the performance impact if any? To answer this question, I first constructed a little script.

Function MeasureIt {
  # NB: Measure-Command returns a TimeSpan; its Milliseconds property is
  # just the milliseconds component (0-999). Use TotalMilliseconds if you
  # want the full elapsed time.
  $m1 = (Measure-Command -Expression {
      Stop-Service -Name Spooler |
      Remove-Item C:\Windows\System32\spool\PRINTERS\* |
      Start-Service -Name Spooler
  }).Milliseconds

  $m2 = (Measure-Command -Expression {
      Stop-Service -Name Spooler
      Remove-Item C:\Windows\System32\spool\PRINTERS\*
      Start-Service -Name Spooler
  }).Milliseconds

  "{0,-10}{1}" -f $m1, $m2
}

1..20 | %{measureit}

Now please read the script and guess which is going to be faster – with or without pipelines? The results absolutely astounded me:

15        307
2         273
2         270
254       280
2         277
10        267
2         271
2         269
2         276
2         268
2         268
2         277
2         272
2         268
2         268
2         268
10        267
2         269
2         267
2         271

I know why this works, but at first I had NO idea how adding the pipeline could offer a 100-fold improvement in speed. Looking harder at the script suggests a more mundane answer: it reads the Milliseconds property of the TimeSpan that Measure-Command returns, which is only the milliseconds component of the elapsed time – so those '2's are almost certainly runs of just over a whole second wrapping around. Re-measuring with TotalMilliseconds is needed before drawing any conclusions about which form is really faster. Hmmm.
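The TimeSpan behaviour is easy to demonstrate on its own: the Milliseconds property holds only the 0–999 component, while TotalMilliseconds holds the full duration:

```powershell
# Milliseconds is the component of a TimeSpan; TotalMilliseconds is the
# full elapsed time - easy to confuse when reading Measure-Command output
$ts = [timespan]::FromMilliseconds(1272)
$ts.Milliseconds       # 272
$ts.TotalMilliseconds  # 1272
```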

Thursday, January 15, 2015

Using Azure VMs for an Exchange DAG Witness Server

Last week, the Azure team released a cool new feature: support for a DAG witness server inside an Azure VM. With the latest version of Exchange, you can configure automatic datacentre failover – but to do that, you now require three physical sites. According to Microsoft, however, many customers have only two physical sites deployed. This is where Azure comes in, since these customers can use Azure as their third physical site. This provides a cost-effective method for improving the overall availability and resiliency of an Exchange deployment, and requires no up-front capital expenditure.

Of course, deployment of production Exchange servers is still unsupported on Azure virtual machines. But I can't help thinking that in due course we'll see this restriction change. MS kind of hints at this by saying: "Stay tuned for future announcements about additional support for Azure deployment scenarios."

Yet another cool scenario available to Azure customers today. For more details on how to do it, see the TechNet article at: http://technet.microsoft.com/en-us/library/dn903504(v=exchg.150).aspx.

Thursday, January 08, 2015

Azure Storage Architecture

I've been doing a lot of work recently around Azure – and one interesting aspect is Azure Storage. Azure Storage enables you to store and retrieve large amounts of unstructured data, such as documents and media files, with Azure Blobs; structured NoSQL data with Azure Tables; reliable messages with Azure Queues; and SMB-based shares, with Azure Files, that can help migrate on-premises applications to the cloud. The first three of these features have been offered by Azure for several years, while Azure Files is newer. Azure Files is still in preview – and you can learn a bit more about the File service from the Azure Storage team's blog.

As an aside, the Azure module you can download enables you to manage Azure Storage features from PowerShell. The cmdlet set is rich (and richer still with Azure Files), with 48 storage-related cmdlets in the latest version of the Azure cmdlets (0.8.12). These cmdlets are fairly easy to use, but I've found the error handling a bit harder than with the equivalent on-premises cmdlets. Many errors, for instance, return little useful information to help you troubleshoot. There is room for improvement – and knowing the Azure team, it will come as fast as they can get to it!

In order both to scale and to provide great recovery from failure, the architecture of Azure storage is a bit more complex than the NTFS file system we are used to. Although it's now a couple of years old, the Azure team also published a detailed description of the Azure Storage Architecture. It's true level 400 stuff – and you can find it here. The paper does not cover the Azure File Service – hopefully Microsoft will provide an update to this great white paper at some point in the future.

If you are planning on architecting solutions utilising Azure Storage, this paper is one you should read and absorb. And, in due course, apply when you implement your solution(s)!

del.icio.us Tags: ,

Thursday, January 01, 2015

Happy New Year

Just a quick post to wish all the readers of this blog a very happy and joyous new year. I hope 2015 turns out to be a great year. For me, this year sees the start of my retirement, although the way things look, I'll be busier than ever. My best to you and yours.

Tuesday, December 16, 2014

Sometimes When You Ask – You Get! Thanks Azure!

Over the past few months, I've been running a set of 1-day sessions for Microsoft SMB partners – these sessions are a combination of lecture and lab work. This material shows them how to position key Azure and Windows Server 2012 R2 features and gives them experience in using the products.

In a recent session, I was asked about Azure Backup and client systems. At that point (early September this year), the answer was no – Azure Backup did not support Windows 8 (or any client platform). The scenario here is the road warrior who never gets back to base, but is forever somewhere in the cloud (via whatever networking they may find in their travels!). They just want their data backed up so that if the laptop dies, is stolen, or the disk itself dies, they can recover their data once the replacement hardware is up and running. Seemed to me to be a service I might buy for myself!

This seemed to me to be worthy of consideration. So I posted a request over on Feedback.Azure.com. A couple of days later, I got a surprise email from the Azure Backup PM, who wanted to chat with me about the suggestion I'd posted. We then had a conference call and I was able to explain the user need and the potential of the suggestion. He listened, asked great questions, then hinted that this was something they could consider for a future release. I did not hold out much hope of the feature arriving any time soon, but it was nice to be listened to and to maybe have an impact down the road.

Imagine my surprise and delight when I read today's post on the Azure team blog, which announced support for Windows client OSs! From asking to delivery in 14 weeks. Nice job!

Sunday, December 07, 2014

Type Accelerators and TypePx

Last week, I updated some earlier scripts related to Type Accelerators over on my PshScripts blog – and I've had some great feedback. Kirk Munro (@Poshoholic on twitter and all around PowerShell superstar) pointed out there is a TypePX module he's published to GitHub.

As to be expected of anything Kirk touches, the TypePX module is rich and well implemented. His module is production ready – so you could easily copy it to your modules folder and start to use it. The TypePx functions are pipeline friendly and have good error handling. 

What I've published is much simpler and designed for a different audience. The whole idea of what I published, and indeed the whole point of my PshScripts blog, is to illustrate simple concepts, APIs, etc. in a simple script. In this case, the essence of type accelerators is wrapped up in a few lines of code. This is what I strive to publish on PshScripts – showing you how to do one thing simply (per script!). What Kirk publishes is enterprise-grade production scripting. I readily admit they are different – but I'd like to think there is room for both: one for learning with, one for using in production.