
Microsoft Windows Azure Development Cookbook

By Neil Mackenzie
About this book
The Windows Azure platform is Microsoft's Platform-as-a-Service environment for hosting services and data in the cloud. It provides developers with on-demand computing, storage, and service connectivity capabilities that facilitate the hosting of highly scalable services in Windows Azure datacenters across the globe.

This practical cookbook shows you advanced development techniques for building highly scalable cloud-based services on the Windows Azure platform. It contains over 80 practical, task-based, and immediately usable recipes, each addressing a particular problem or scenario encountered when developing such services.

Packed with reusable, real-world recipes, the book starts by explaining the various access control mechanisms used in the Windows Azure platform. Next, it covers the advanced features of Windows Azure Blob storage, Windows Azure Table storage, and Windows Azure Queues. The book then dives deep into topics such as developing Windows Azure hosted services, using Windows Azure Diagnostics, managing hosted services with the Service Management API, using SQL Azure, and using the Windows Azure AppFabric Service Bus. You will also see how to use several of the latest features, such as VM roles, Windows Azure Connect, startup tasks, and the Windows Azure AppFabric Caching Service.
Publication date: August 2011
Publisher: Packt
Pages: 392
ISBN: 9781849682220

 

Chapter 1. Controlling Access in the Windows Azure Platform

In this chapter, we will cover:

  • Managing Windows Azure Storage Service access keys

  • Connecting to the Windows Azure Storage Service

  • Using SetConfigurationSettingPublisher()

  • Connecting to the storage emulator

  • Managing access control for containers and blobs

  • Creating a Shared Access Signature for a container or blob

  • Using a container-level access policy

  • Authenticating against the Windows Azure Service Management REST API

  • Authenticating with the Windows Azure AppFabric Caching Service

 

Introduction


The various components of the Windows Azure Platform are exposed using Internet protocols. Consequently, they need to support authentication so that access to them can be controlled.

The Windows Azure Storage Service manages the storage of blobs, queues, and tables. It is essential that this data be kept secure, so that there is no unauthorized access to it. Each storage account has an account name and an access key which are used to authenticate access to the storage service. The management of these access keys is important. The storage service provides two access keys for each storage account, so that the access key not being used can be regenerated. We see how to do this in the Managing Windows Azure Storage Service access keys recipe.

The storage service authenticates requests using a hash-based message authentication code (HMAC), in which elements of a storage operation request are hashed with the access key. On receiving the request, the storage service recalculates the HMAC and either accepts or denies the request. The Windows Azure Storage Client library provides several classes that support various ways of creating an HMAC, and which hide the complexity of creating and using one. We see how to use them in the Connecting to the Windows Azure Storage Service recipe. The SetConfigurationSettingPublisher() method has caused some programmer grief, so we look at it in the Using SetConfigurationSettingPublisher() recipe.

The Windows Azure SDK provides a compute emulator and a storage emulator. The latter uses a hard-coded account name and access key. We see the support provided for this in the Connecting to the storage emulator recipe.

Blobs are ideal for storing static content for web roles, so the storage service provides several authentication methods for access to containers and blobs. Indeed, a container can be configured to allow anonymous access to the blobs in it. Blobs in such a container can be downloaded without any authentication. We see how to configure this in the Managing access control for containers and blobs recipe.

There is a need to provide an intermediate level of authentication for containers and blobs, a level that lies between full authentication and anonymous access. The storage service supports this through the concept of a shared access signature, a pre-calculated authentication token that can be shared in a controlled manner, allowing the bearer to access a specific container or blob for up to one hour. We see how to do this in the Creating a Shared Access Signature for a container or blob recipe.

A shared access policy combines access rights with a time for which they are valid. A container-level access policy is a shared access policy that is associated by name with a container. A best practice is to derive a shared access signature from a container-level access policy. Doing so provides greater control over the shared access signature, as it becomes possible to revoke it. We see how to do this in the Using a container-level access policy recipe.

There is more to the Windows Azure Platform than storage. The Windows Azure Service Management REST API is a RESTful API that provides programmatic access to most of the functionality available on the Windows Azure Portal. This API uses X.509 certificates for authentication. Prior to use, the certificate must be uploaded, as a management certificate, to the Windows Azure Portal. The certificate must then be attached to each request made against the Service Management API. We see how to do this in the Authenticating against the Windows Azure Service Management REST API recipe.

The Windows Azure AppFabric services use a different authentication scheme, based on a service namespace and authentication token. In practice, these are similar to the account name and access key used to authenticate against the storage service, although the implementation is different. The Windows Azure AppFabric services use the Windows Azure Access Control Service (ACS) to perform authentication. However, this is abstracted away in the various SDKs provided for the services. We see how to authenticate to one of these services in the Authenticating with the Windows Azure AppFabric Caching Service recipe.

 

Managing Windows Azure Storage Service access keys


The data stored by the Windows Azure Storage Service must be secured against unauthorized access. To ensure that security, all storage operations against the Table service and the Queue service must be authenticated. Similarly, other than inquiry requests against public containers and blobs, all operations against the Blob service must also be authenticated. The Blob service supports public containers so that, for example, blobs containing images can be downloaded directly into a web page.

Each storage account has a primary access key and a secondary access key that can be used to authenticate operations against the storage service. When creating a request against the storage service, one of the keys is used along with various request headers to generate a 256-bit, hash-based message authentication code (HMAC). This HMAC is added to the request in the Authorization request header. On receiving the request, the storage service recalculates the HMAC and rejects the request if the received and calculated HMAC values differ. The Windows Azure Storage Client library provides methods that handle the creation of the HMAC and its attachment to the storage operation request.
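
As an illustrative sketch only, the signing step amounts to something like the following. The storage service defines a precise string-to-sign that includes more elements (canonicalized headers and the canonicalized resource) than shown here, and this helper method is an assumption for illustration (it requires a using directive for System.Security.Cryptography):

// Sketch: compute the value of the Authorization header, which has the
// form "SharedKey {account}:{signature}". The real string-to-sign is
// specified by the storage service and is more elaborate than this.
private static String CreateAuthorizationHeader(String accountName,
   String base64AccountKey, String stringToSign)
{
   Byte[] keyBytes = Convert.FromBase64String(base64AccountKey);
   using (HMACSHA256 hmacSha256 = new HMACSHA256(keyBytes))
   {
      Byte[] signatureBytes = hmacSha256.ComputeHash(
         System.Text.Encoding.UTF8.GetBytes(stringToSign));
      return String.Format("SharedKey {0}:{1}",
         accountName, Convert.ToBase64String(signatureBytes));
   }
}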

There is no distinction between the primary and secondary access keys. The purpose of the secondary access key is to enable continued use of the storage service while the other access key is being regenerated. While the primary access key is used for authentication against the storage service, the secondary access key can be regenerated without affecting the service—and vice versa. This can be extremely useful in situations where storage access credentials must be rotated regularly.

As possession of the storage account name and access key is sufficient to provide full control over the data managed by the storage account, it is essential that the access keys be kept secure. In particular, access keys should never be downloaded to a client, such as a smartphone, as that exposes them to potential abuse.

In this recipe, we will learn how to use the primary and secondary access keys.

Getting ready

This recipe requires a deployed Windows Azure hosted service that uses a Windows Azure storage account.

How to do it...

We are going to regenerate the secondary access key for a storage account and configure a hosted service to use it. We do this as follows:

  1. Go to the Windows Azure Portal.

  2. In the Storage Accounts section, regenerate the secondary access key for the desired storage account.

  3. In the Hosted Services section, configure the desired hosted service and replace the value of AccountKey in the DataConnectionString setting with the newly generated secondary access key.

How it works...

In step 2, we can choose which access key to regenerate. It is important that we never regenerate the access key currently in use, since doing so immediately invalidates it and any service still authenticating with it loses access to the storage account. Consequently, we regenerate only the secondary access key if the primary access key is currently in use, and vice versa.

In step 3, we upgrade the service configuration to use the access key we just generated. This change can be trapped and handled by the hosted service. However, it should not require the hosted service to be recycled. We see how to handle configuration changes in the Handling changes to the configuration and topology of a hosted service recipe in Chapter 5.
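
As a sketch of how a role might accept this configuration change without recycling, it can hook the RoleEnvironment.Changing event in its RoleEntryPoint class. This assumes using directives for System.Linq and Microsoft.WindowsAzure.ServiceRuntime; the DataConnectionString name matches this recipe:

// Sketch: accept a changed DataConnectionString without recycling the
// role instance. Wire this up in OnStart().
RoleEnvironment.Changing += (sender, e) =>
{
   Boolean onlyConnectionStringChanged = e.Changes
      .OfType<RoleEnvironmentConfigurationSettingChange>()
      .All(change =>
         change.ConfigurationSettingName == "DataConnectionString");

   // Setting Cancel to true recycles the role instance; leaving it
   // false applies the new setting in place.
   e.Cancel = !onlyConnectionStringChanged;
};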

 

Connecting to the Windows Azure Storage Service


In a Windows Azure hosted service, the storage account name and access key are stored in the service configuration file. By convention, the account name and access key for data access are provided in a setting named DataConnectionString. The account name and access key needed for Windows Azure diagnostics must be provided in a setting named Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString.

Note

The DataConnectionString setting must be declared in the ConfigurationSettings section of the service definition file. However, unlike other settings, the connection string setting for Windows Azure diagnostics is implicitly defined when the diagnostics module is specified in the Imports section of the service definition file. Consequently, it must not be specified in the ConfigurationSettings section.
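
For illustration, a sketch of the relevant fragment of a service definition file might look like the following, where {ROLE_NAME} is a placeholder. The DataConnectionString setting is declared explicitly, while the Diagnostics import implicitly defines the diagnostics connection string:

<WebRole name="{ROLE_NAME}">
   <ConfigurationSettings>
      <Setting name="DataConnectionString" />
   </ConfigurationSettings>
   <Imports>
      <Import moduleName="Diagnostics" />
   </Imports>
</WebRole>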

A best practice is to use different storage accounts for application data and diagnostic data. This reduces the possibility of application data access being throttled by competition for concurrent writes from the diagnostics monitor. It also provides a security boundary between application data and diagnostics data, as diagnostics data may be accessed by individuals who should have no access to application data.

In the Windows Azure Storage Client library, access to the storage service is through one of the client classes. There is one client class for each of Blob service, Queue service, and Table service—CloudBlobClient, CloudQueueClient, and CloudTableClient respectively. Instances of these classes store the pertinent endpoint, as well as the account name and access key.

The CloudBlobClient class provides methods to access containers, list their contents, and get references to containers and blobs. The CloudQueueClient class provides methods to list queues and to get a reference to the CloudQueue instance used as an entry point to the Queue service functionality. The CloudTableClient class provides methods to manage tables and to get the TableServiceContext instance that provides access to the WCF Data Services functionality used with the Table service. Note that CloudBlobClient, CloudQueueClient, and CloudTableClient instances are not thread safe, so distinct instances should be used when accessing these services concurrently.

The client classes must be initialized with the account name and access key, as well as the appropriate storage service endpoint. The Microsoft.WindowsAzure namespace has several helper classes. The StorageCredentialsAccountAndKey class initializes a StorageCredentials instance from an account name and access key, while the StorageCredentialsSharedAccessSignature class initializes a StorageCredentials instance from a shared access signature. The CloudStorageAccount class provides methods to initialize an encapsulated StorageCredentials instance directly from the service configuration file.

In this recipe, we will learn how to use CloudBlobClient, CloudQueueClient, and CloudTableClient instances to connect to the storage service.

Getting ready

This recipe assumes the application configuration file contains the following:

<appSettings>
   <add key="DataConnectionString"value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
   <add key="AccountName" value="{ACCOUNT_NAME}"/>
   <add key="AccountKey" value="{ACCOUNT_KEY}"/>
</appSettings>

Note

Downloading the example code for this book

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the storage account name and access key.

How to do it...

We are going to connect to the Table service, the Blob service, and the Queue service to perform a simple operation on each. We do this as follows:

  1. Add a new class named ConnectingToStorageExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
  3. Add the following method, connecting to the blob service, to the class:

    private static void UseCloudStorageAccountExtensions()
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);

       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference("{CONTAINER_NAME}");

       cloudBlobContainer.Create();
    }
  4. Add the following method, connecting to the Table service, to the class:

    private static void UseCredentials()
    {
       String accountName =
          ConfigurationManager.AppSettings["AccountName"];
       String accountKey =
          ConfigurationManager.AppSettings["AccountKey"];
       StorageCredentialsAccountAndKey storageCredentials =
          new StorageCredentialsAccountAndKey(accountName, accountKey);

       CloudStorageAccount cloudStorageAccount =
          new CloudStorageAccount(storageCredentials, true);
       CloudTableClient tableClient = new CloudTableClient(
          cloudStorageAccount.TableEndpoint.AbsoluteUri,
          storageCredentials);

       Boolean tableExists =
          tableClient.DoesTableExist("{TABLE_NAME}");
    }
  5. Add the following method, connecting to the Queue service, to the class:

    private static void UseCredentialsWithUri()
    {
       String accountName =
          ConfigurationManager.AppSettings["AccountName"];
       String accountKey =
          ConfigurationManager.AppSettings["AccountKey"];
       StorageCredentialsAccountAndKey storageCredentials =
          new StorageCredentialsAccountAndKey(accountName, accountKey);

       String baseUri = String.Format(
          "https://{0}.queue.core.windows.net/", accountName);
       CloudQueueClient cloudQueueClient =
          new CloudQueueClient(baseUri, storageCredentials);
       CloudQueue cloudQueue =
          cloudQueueClient.GetQueueReference("{QUEUE_NAME}");

       Boolean queueExists = cloudQueue.Exists();
    }
  6. Add the following method, using the other methods, to the class:

    public static void UseConnectionToStorageExample()
    {
       UseCloudStorageAccountExtensions();
       UseCredentials();
       UseCredentialsWithUri();
    }

How it works...

In steps 1 and 2, we set up the class.

In step 3, we implement the standard way to access the storage service using the Storage Client library. We use the static CloudStorageAccount.Parse() method to create a CloudStorageAccount instance from the value of the connection string stored in the configuration file. We then use this instance with the CreateCloudBlobClient() extension method of the CloudStorageAccount class to get the CloudBlobClient instance we use to connect to the Blob service. We can also use this technique with the Table service and the Queue service through the relevant extension methods for them: CreateCloudTableClient() and CreateCloudQueueClient() respectively. We complete this example by using the CloudBlobClient instance to get a CloudBlobContainer reference to a container and then create the container. We need to replace {CONTAINER_NAME} with the name for a container.

In step 4, we create a StorageCredentialsAccountAndKey instance directly from the account name and access key. We then use this to construct a CloudStorageAccount instance, specifying that any connection should use HTTPS. Using this technique, we need to provide the Table service endpoint explicitly when creating the CloudTableClient instance. We then use this to verify the existence of a table. We need to replace {TABLE_NAME} with the name for a table. We can use the same technique with the Blob service and Queue service by using the relevant CloudBlobClient or CloudQueueClient constructor.

In step 5, we use a similar technique except that we avoid the intermediate step of using a CloudStorageAccount instance and explicitly provide the endpoint for the Queue service. We use the CloudQueueClient instance created in this step to verify the existence of a queue. We need to replace {QUEUE_NAME} with the name of a queue. Note that we have hard-coded the endpoint for the Queue service.

In step 6, we add a method that invokes the methods added in the earlier steps.

 

Using SetConfigurationSettingPublisher()


The CloudStorageAccount class in the Windows Azure Storage Client library encapsulates a StorageCredentials instance that can be used to authenticate against the Windows Azure Storage Service. It also exposes a FromConfigurationSetting() factory method that creates a CloudStorageAccount instance from a configuration setting.

This method has caused much confusion since, without additional configuration, it throws an InvalidOperationException with a message of "SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting() can be used." Consequently, before using FromConfigurationSetting(), it is necessary to invoke SetConfigurationSettingPublisher() once. The intent of this method is that it can be used to specify alternate ways of retrieving the data connection string that FromConfigurationSetting() uses to initialize the CloudStorageAccount instance. This setting is process-wide, so it is typically configured in the OnStart() method of the RoleEntryPoint class for the role.

The following is a simple implementation for SetConfigurationSettingPublisher():

CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
   configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
});

There are several levels of indirection here, but the central feature is the use of a method that takes a String parameter specifying the name of the configuration setting and returns the value of that setting. In the example here, the method used is RoleEnvironment.GetConfigurationSettingValue(). The configuration-setting publisher can be set to retrieve configuration settings from any location including app.config or web.config.
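
For example, the following sketch sets a publisher that falls back to the application configuration file when the code is not running under the role environment, which can be convenient for testing outside Windows Azure. This is an illustrative variant, not the implementation used in the recipe below:

// Sketch: use the service configuration when running under Windows Azure,
// falling back to app.config/web.config otherwise.
CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
   String value = RoleEnvironment.IsAvailable
      ? RoleEnvironment.GetConfigurationSettingValue(configName)
      : ConfigurationManager.AppSettings[configName];
   configSetter(value);
});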

The use of SetConfigurationSettingPublisher() is no longer encouraged. Instead, it is better to use CloudStorageAccount.Parse(), which takes a data connection string in canonical form and creates a CloudStorageAccount instance from it. We see how to do this in the Connecting to the Windows Azure Storage Service recipe.

In this recipe, we will learn how to set and use a configuration-setting publisher to retrieve the data connection string from a configuration file.

How to do it...

We are going to add an implementation for SetConfigurationSettingPublisher() to a worker role. We do this as follows:

  1. Create a new cloud project.

  2. Add a worker role to the project.

  3. Add the following to the WorkerRole section of the ServiceDefinition.csdef file:

    <ConfigurationSettings>
       <Setting name="DataConnectionString" />
    </ConfigurationSettings>
  4. Add the following to the ConfigurationSettings section of the ServiceConfiguration.cscfg file:

    <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
  5. Replace WorkerRole.Run() with the following:

    public override void Run()
    {
       UseFromConfigurationSetting("{CONTAINER_NAME}");
    
       while (true)
       {
          Thread.Sleep(10000);
          Trace.WriteLine("Working", "Information");
       }
    }
  6. Replace WorkerRole.OnStart() with the following:

    public override bool OnStart()
    {
       ServicePointManager.DefaultConnectionLimit = 12;
    
       CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
       {
          configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
       });
    
       return base.OnStart();
    }
  7. Add the following method, implicitly using the configuration setting publisher, to the WorkerRole class:

    private void UseFromConfigurationSetting(String containerName)
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.FromConfigurationSetting(
          "DataConnectionString");

       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference(containerName);

       cloudBlobContainer.Create();
    }

How it works...

In steps 1 and 2, we set up the project.

In steps 3 and 4, we define and provide a value for the DataConnectionString setting in the service definition and service configuration files. We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the account name and access key.

In step 5, we modify the Run() method to invoke a method that accesses the storage service. We must provide an appropriate value for {CONTAINER_NAME}.

In step 6, we modify the OnStart() method to set a configuration setting publisher for the role instance. We set it to retrieve configuration settings from the service configuration file.

In step 7, we invoke CloudStorageAccount.FromConfigurationSetting(), which uses the configuration setting publisher we added in step 6. We then use the CloudStorageAccount instance to create CloudBlobClient and CloudBlobContainer instances that we use to create a new container in blob storage.

 

Connecting to the storage emulator


The Windows Azure SDK provides a compute emulator and a storage emulator that work in a development environment to provide a local emulation of Windows Azure hosted services and storage services. There are some differences in functionality between storage services and the storage emulator. Prior to Windows Azure SDK v1.3, the storage emulator was named development storage.

Note

By default, the storage emulator uses SQL Server Express, but it can be configured to use SQL Server.

An immediate difference is that the storage emulator supports only one account name and access key. The account name is hard-coded to be devstoreaccount1. The access key is hard-coded to be:

Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

Another difference is that the storage endpoints are constructed differently for the storage emulator. The storage service uses URL subdomains to distinguish the endpoints for the various types of storage. For example, the endpoint for the Blob service for a storage account named myaccount is:

myaccount.blob.core.windows.net

The endpoints for the other storage types are constructed similarly by replacing the word blob with either table or queue.

This differentiation by subdomain name is not used in the storage emulator which is hosted on the local host at 127.0.0.1. Instead, the storage emulator distinguishes the endpoints for various types of storage through use of different ports. Furthermore, the account name, rather than being part of the subdomain, is provided as part of the URL. Consequently, the endpoints used by the storage emulator are as follows:

  • 127.0.0.1:10000/devstoreaccount1 Blob

  • 127.0.0.1:10001/devstoreaccount1 Queue

  • 127.0.0.1:10002/devstoreaccount1 Table
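
As a quick sketch, these endpoints can be confirmed by inspecting the CloudStorageAccount.DevelopmentStorageAccount property used later in this recipe:

// Sketch: print the storage emulator endpoints used by the library.
CloudStorageAccount devAccount =
   CloudStorageAccount.DevelopmentStorageAccount;
Console.WriteLine(devAccount.BlobEndpoint);   // http://127.0.0.1:10000/devstoreaccount1
Console.WriteLine(devAccount.QueueEndpoint);  // http://127.0.0.1:10001/devstoreaccount1
Console.WriteLine(devAccount.TableEndpoint);  // http://127.0.0.1:10002/devstoreaccount1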

The Windows Azure Storage Client library hides much of this complexity, but an understanding of it remains important in case something goes wrong. The account name and access key are hard-coded into the Storage Client library, which also provides simple access to an appropriately constructed CloudStorageAccount object.

The Storage Client library also supports a special value for the DataConnectionString in the service configuration file. Instead of specifying the account name and access key, it is sufficient to specify the following:

UseDevelopmentStorage=true

For example, this is specified as follows in the service configuration file:

<Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />

This value can also be used for the Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString data connection string required for Windows Azure Diagnostics.

The CloudStorageAccount.Parse() and CloudStorageAccount.FromConfigurationSetting() methods handle this value in a special way to create a CloudStorageAccount object that can be used to authenticate against the storage emulator.

In this recipe, we will learn how to connect to the storage emulator.

Getting ready

This recipe assumes the following is in the application configuration file:

<appSettings>
   <add key="DataConnectionString"value="UseDevelopmentStorage=true"/>
</appSettings>

How to do it...

We are going to connect to the storage emulator in various ways and perform some operations on blobs. We do this as follows:

  1. Add a class named StorageEmulatorExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
  3. Add the following private members to the class:

    private String containerName;
    private String blobName;
  4. Add the following constructor to the class:

    StorageEmulatorExample(String containerName, String blobName)
    {
       this.containerName = containerName;
       this.blobName = blobName;
    }
  5. Add the following method, using the configuration file, to the class:

    private void UseConfigurationFile()
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);

       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference(containerName);
       cloudBlobContainer.Create();
    }
  6. Add the following method, using an explicit storage account, to the class:

    private void CreateStorageCredential()
    {
       String baseAddress =
          "http://127.0.0.1:10000/devstoreaccount1";
       String accountName = "devstoreaccount1";
       String accountKey =
          "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";
       StorageCredentialsAccountAndKey storageCredentials =
          new StorageCredentialsAccountAndKey(accountName, accountKey);

       CloudBlobClient cloudBlobClient =
          new CloudBlobClient(baseAddress, storageCredentials);
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference(containerName);
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);

       cloudBlockBlob.UploadText("If we shadows have offended.");
       cloudBlockBlob.Metadata["{METADATA_KEY}"] = "{METADATA_VALUE}";
       cloudBlockBlob.SetMetadata();
    }
    
  7. Add the following method, using the CloudStorageAccount.DevelopmentStorageAccount property, to the class:

    private void UseDevelopmentStorageAccount()
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.DevelopmentStorageAccount;

       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference(containerName);
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);

       cloudBlockBlob.FetchAttributes();
       BlobAttributes blobAttributes = cloudBlockBlob.Attributes;
       String metadata =
          blobAttributes.Metadata["{METADATA_KEY}"];
    }
  8. Add the following method, using the methods added earlier, to the class:

    public static void UseStorageEmulatorExample()
    {
       String containerName = "{CONTAINER_NAME}";
       String blobName = "{BLOB_NAME}";

       StorageEmulatorExample example =
          new StorageEmulatorExample(containerName, blobName);

       example.UseConfigurationFile();
       example.CreateStorageCredential();
       example.UseDevelopmentStorageAccount();
    }
    

How it works...

In steps 1 and 2, we set up the class. In step 3, we add some private members for the container name and blob name, which we initialize in the constructor we add in step 4.

In step 5, we retrieve a data connection string from a configuration file and pass it into CloudStorageAccount.Parse() to create a CloudStorageAccount instance. We use this to get a CloudBlobContainer reference to the specified container, which we then create.

In step 6, we create a StorageCredentialsAccountAndKey instance from the explicitly provided storage emulator values for the account name and access key. We use the resulting StorageCredentials to initialize a CloudBlobClient object, explicitly providing the storage emulator endpoint for blob storage. We then create a reference to a blob, upload some text to the blob, define some metadata for the blob, and finally update the blob with it. We must replace {METADATA_KEY} and {METADATA_VALUE} with actual values.

In step 7, we initialize a CloudStorageAccount from the hard-coded DevelopmentStorageAccount property exposed by the CloudStorageAccount class. We then get a CloudBlockBlob object, which we use to retrieve the properties stored on the blob, including the metadata we added in step 6. We should replace {METADATA_KEY} with the same value that we used in step 6.

In step 8, we add a helper method that invokes each of the methods we added earlier. We must replace {CONTAINER_NAME} and {BLOB_NAME} with appropriate names for the container and blob.

There's more...

Fiddler is a program that captures HTTP traffic, which makes it very useful for diagnosing problems when using the Windows Azure Storage Service. Its use is completely transparent when cloud storage is being used. However, when the storage emulator is being used, the data connection string must be modified for Fiddler to be able to monitor the traffic. The data connection string must be changed to the following:

UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler

Fiddler can be downloaded from the following URL:

http://www.fiddler2.com/fiddler2/

 

Managing access control for containers and blobs


The Windows Azure Storage Service authenticates all requests against the Table service and Queue service. However, the storage service allows the possibility of unauthenticated access against the Blob service. The reason is that blobs provide an ideal location for storing large static content for a website. For example, the images in a photo-sharing site could be stored as blobs and downloaded directly from the Blob service without being transferred through a web role.

Public access control for the Blob service is managed at the container level. The Blob service supports the following three types of access control:

  • No public read access, in which all access must be authenticated

  • Public read access for blobs only, which allows blobs in a container to be read without authentication but does not permit the container's contents to be listed

  • Full public read access, in which authentication is not required to read container information (including the list of blobs) and the blobs contained in it

No public read access is the same access control as for the Queue service and Table service. The other two access control types both allow anonymous access to a blob, so that, for example, the blob can be downloaded into a browser by providing its full URL.

In the Windows Azure Storage Client library, the BlobContainerPublicAccessType enumeration specifies the three types of public access control for a container. The BlobContainerPermissions class exposes two properties: PublicAccess specifying a member of the BlobContainerPublicAccessType enumeration and SharedAccessPolicies specifying a set of shared access policies. The SetPermissions() method of the CloudBlobContainer class is used to associate a BlobContainerPermissions instance with the container. The GetPermissions() method retrieves the access permissions for a container.
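
For example, a minimal sketch of inspecting the current public access level of an existing container might look like this, where cloudBlobClient is assumed to have been created as in the recipe below:

// Sketch: retrieve the public access level currently set on a container.
CloudBlobContainer container =
   cloudBlobClient.GetContainerReference("{CONTAINER_NAME}");
BlobContainerPermissions permissions = container.GetPermissions();
BlobContainerPublicAccessType publicAccess = permissions.PublicAccess;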

In this recipe, we will learn how to specify the level of public access control for containers and blobs managed by the Blob service.

Getting ready

This recipe assumes the following is in the application configuration file:

<appSettings>
   <add key="DataConnectionString"value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
</appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the account name and access key.

How to do it...

We are going to specify various levels of public access control for a container. We do this as follows:

  1. Add a new class named BlobContainerPublicAccessExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
    using System.Net;
  3. Add the following method, setting public access control for a container, to the class:

    public static void CreateContainerAndSetPermission(
       String containerName, String blobName,
       BlobContainerPublicAccessType publicAccessType)
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();

       CloudBlobContainer blobContainer =
          new CloudBlobContainer(containerName, cloudBlobClient);
       blobContainer.Create();

       BlobContainerPermissions blobContainerPermissions =
          new BlobContainerPermissions()
       {
          PublicAccess = publicAccessType
       };
       blobContainer.SetPermissions(blobContainerPermissions);

       CloudBlockBlob blockBlob =
          blobContainer.GetBlockBlobReference(blobName);
       blockBlob.UploadText("Has been changed glorious summer");
    }
  4. Add the following method, retrieving a blob, to the class:

    public static void GetBlob(String containerName, String blobName)
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       Uri blobUri = new Uri(cloudStorageAccount.BlobEndpoint +
          containerName + "/" + blobName);

       HttpWebRequest httpWebRequest =
          (HttpWebRequest)HttpWebRequest.Create(blobUri);
       httpWebRequest.Method = "GET";
       using (HttpWebResponse response =
          (HttpWebResponse)httpWebRequest.GetResponse())
       {
          String status = response.StatusDescription;
       }
    }
  5. Add the following method, using the method we just added, to the class:

    public static void UseBlobContainerPublicAccessExample()
    {
       CreateContainerAndSetPermission("container1", "blob1",
          BlobContainerPublicAccessType.Blob);

       CreateContainerAndSetPermission("container2", "blob2",
          BlobContainerPublicAccessType.Container);

       CreateContainerAndSetPermission("container3", "blob3",
          BlobContainerPublicAccessType.Off);

       GetBlob("container1", "blob1");
       GetBlob("container3", "blob3");
    }

How it works...

In steps 1 and 2, we set up the class.

In step 3, we add a method that creates a container and blob, and applies a public access policy to the container. We create a CloudBlobClient instance from the data connection string in the configuration file. We then create a new container using the name passed in to the method. Then, we create a BlobContainerPermissions instance with the BlobContainerPublicAccessType passed into the method and set the permissions on the container. Note that we must create the container before we set the permissions because SetPermissions() sets the permissions directly on the container. Finally, we create a blob in the container.

In step 4, we use an HttpWebRequest instance to retrieve the blob without providing any authentication. This request causes a 404 (Not Found) error when the request attempts to retrieve a blob that has not been configured for public access. Note that when constructing blobUri for the storage emulator, we must add a / into the path after cloudStorageAccount.BlobEndpoint because of a difference in the way the Storage Client library constructs the endpoint for the storage emulator and the storage service. For example, we need to use the following for the storage emulator:

Uri blobUri = new Uri(cloudStorageAccount.BlobEndpoint + "/" + containerName + "/" + blobName);

In step 5, we add a method that invokes the CreateContainerAndSetPermission() method once for each value of the BlobContainerPublicAccessType enumeration. We then invoke the GetBlob() method twice, to retrieve blobs from different containers. The second invocation leads to a 404 error since container3 has not been configured for unauthenticated public access.

See also

  • The recipe named Creating a Shared Access Signature for a container or blob in this chapter

 

Creating a Shared Access Signature for a container or blob


The Windows Azure Blob Service supports fully authenticated requests, anonymous requests, and requests authenticated by a temporary access key referred to as a Shared Access Signature. The latter allows access to containers or blobs to be limited to only those in possession of the Shared Access Signature.

A Shared Access Signature is constructed from a combination of:

  • Resource (container or blob)

  • Access rights (read, write, delete, list)

  • Start time

  • Expiration time

These are combined into a string from which a 256-bit, hash-based message authentication code (HMAC) is generated. An access key for the storage account is used to seed the HMAC generation. This HMAC is referred to as a Shared Access Signature. The process of generating a Shared Access Signature requires no interaction with the Blob service. A Shared Access Signature is valid for up to one hour, which limits the allowable values for the start time and expiration time.

When using a Shared Access Signature to authenticate a request, it is submitted as one of the query string parameters. The other query parameters comprise the information from which the shared access signature was created. This allows the Blob service to create a Shared Access Signature, using the access key for the storage account, and compare it with the Shared Access Signature submitted with the request. A request is denied if it has an invalid Shared Access Signature.

An example of a storage request for a blob named theBlob in a container named chapter1 is:

GET /chapter1/theBlob

An example of the query string parameters is:

st=2011-03-22T05%3A49%3A09Z
&se=2011-03-22T06%3A39%3A09Z
&sr=b
&sp=r
&sig=GLqbiDwYweXW4y2NczDxmWDzrJCc89oFfgBMTieGPww%3D

The st parameter is the start time for the validity of the Shared Access Signature. The se parameter is the expiration time for the validity of the Shared Access Signature. The sr parameter specifies that the Shared Access Signature is for a blob. The sp parameter specifies that the Shared Access Signature authenticates for read access only. The sig parameter is the Shared Access Signature. A complete description of these parameters is available on MSDN at the following URL:

http://msdn.microsoft.com/en-us/library/ee395415.aspx

Once a Shared Access Signature has been created and transferred to a client, no further verification of the client is required. It is important, therefore, that the Shared Access Signature be created with the minimum period of validity and that its distribution be restricted as much as possible. It is not possible to revoke a Shared Access Signature created in this manner.

In this recipe, we will learn how to create and use a Shared Access Signature.

Getting ready

This recipe assumes the following is in the application configuration file:

<appSettings>
   <add key="DataConnectionString"value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
   <add key="AccountName" value="{ACCOUNT_NAME}"/>
   <add key="AccountKey" value="{ACCOUNT_KEY}"/>
</appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the account name and access key.

How to do it...

We are going to create and use Shared Access Signatures for a blob. We do this as follows:

  1. Add a class named SharedAccessSignaturesExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
    using System.Net;
    using System.IO;
    using System.Security.Cryptography;
  3. Add the following private members to the class:

    private String blobEndpoint;
    private String accountName;
    private String accountKey;
  4. Add the following constructor to the class:

    SharedAccessSignaturesExample()
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       blobEndpoint =
          cloudStorageAccount.BlobEndpoint.AbsoluteUri;
       accountName = cloudStorageAccount.Credentials.AccountName;

       StorageCredentialsAccountAndKey accountAndKey =
          cloudStorageAccount.Credentials as
          StorageCredentialsAccountAndKey;
       accountKey =
          accountAndKey.Credentials.ExportBase64EncodedKey();
    }
  5. Add the following method, creating a container and blob, to the class:

    private void CreateContainerAndBlob(String containerName,
       String blobName)
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          new CloudBlobContainer(containerName, cloudBlobClient);
       cloudBlobContainer.Create();
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);
       cloudBlockBlob.UploadText("This weak and idle theme.");
    }
  6. Add the following method, getting a Shared Access Signature, to the class:

    private String GetSharedAccessSignature(
       String containerName, String blobName)
    {
       SharedAccessPolicy sharedAccessPolicy =
          new SharedAccessPolicy()
       {
          Permissions = SharedAccessPermissions.Read,
          SharedAccessStartTime =
             DateTime.UtcNow.AddMinutes(-10d),
          SharedAccessExpiryTime =
             DateTime.UtcNow.AddMinutes(40d)
       };

       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          new CloudBlobContainer(containerName, cloudBlobClient);
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);

       String sharedAccessSignature =
          cloudBlockBlob.GetSharedAccessSignature(sharedAccessPolicy);

       return sharedAccessSignature;
    }
  7. Add the following method, creating a Shared Access Signature, to the class:

    private String CreateSharedAccessSignature(
       String containerName, String blobName, String permissions)
    {
       String iso8061Format = "{0:yyyy-MM-ddTHH:mm:ssZ}";
       DateTime startTime = DateTime.UtcNow;
       DateTime expiryTime = startTime.AddHours(1d);
       String start = String.Format(iso8061Format, startTime);
       String expiry = String.Format(iso8061Format, expiryTime);
       String stringToSign = String.Format(
          "{0}\n{1}\n{2}\n/{3}/{4}\n",
          permissions, start, expiry, accountName, containerName);

       String rawSignature = String.Empty;
       Byte[] keyBytes = Convert.FromBase64String(accountKey);
       using (HMACSHA256 hmacSha256 = new HMACSHA256(keyBytes))
       {
          Byte[] utf8EncodedStringToSign =
             System.Text.Encoding.UTF8.GetBytes(stringToSign);
          Byte[] signatureBytes =
             hmacSha256.ComputeHash(utf8EncodedStringToSign);
          rawSignature = Convert.ToBase64String(signatureBytes);
       }

       String sharedAccessSignature = String.Format(
          "?st={0}&se={1}&sr=c&sp={2}&sig={3}",
          Uri.EscapeDataString(start),
          Uri.EscapeDataString(expiry),
          permissions,
          Uri.EscapeDataString(rawSignature));

       return sharedAccessSignature;
    }
  8. Add the following method, authenticating with a Shared Access Signature, to the class:

    private void AuthenticateWithSharedAccessSignature(
       String containerName, String blobName,
       String sharedAccessSignature)
    {
       StorageCredentialsSharedAccessSignature storageCredentials =
          new StorageCredentialsSharedAccessSignature(
          sharedAccessSignature);
       CloudBlobClient cloudBlobClient =
          new CloudBlobClient(blobEndpoint, storageCredentials);

       CloudBlobContainer cloudBlobContainer =
          new CloudBlobContainer(containerName, cloudBlobClient);
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);
       String blobText = cloudBlockBlob.DownloadText();
    }
  9. Add the following method, using a Shared Access Signature, to the class:

    private void UseSharedAccessSignature(
       String containerName, String blobName,
       String sharedAccessSignature)
    {
       String requestMethod = "GET";
       String urlPath = String.Format("{0}{1}/{2}{3}",
          blobEndpoint, containerName, blobName,
          sharedAccessSignature);
       Uri uri = new Uri(urlPath);
       HttpWebRequest request =
          (HttpWebRequest)WebRequest.Create(uri);
       request.Method = requestMethod;
       using (HttpWebResponse response =
          (HttpWebResponse)request.GetResponse())
       {
          Stream dataStream = response.GetResponseStream();
          using (StreamReader reader =
             new StreamReader(dataStream))
          {
             String responseFromServer = reader.ReadToEnd();
          }
       }
    }
  10. Add the following method, using the methods added earlier, to the class:

    public static void UseSharedAccessSignaturesExample()
    {
       String containerName = "{CONTAINER_NAME}";
       String blobName = "{BLOB_NAME}";

       SharedAccessSignaturesExample example =
          new SharedAccessSignaturesExample();

       example.CreateContainerAndBlob(containerName, blobName);

       String sharedAccessSignature1 =
          example.GetSharedAccessSignature(containerName, blobName);
       example.AuthenticateWithSharedAccessSignature(
          containerName, blobName, sharedAccessSignature1);

       String sharedAccessSignature2 =
          example.CreateSharedAccessSignature(
          containerName, blobName, "rw");
       example.UseSharedAccessSignature(
          containerName, blobName, sharedAccessSignature2);
    }
    

How it works...

In steps 1 and 2, we set up the class. In step 3, we add some private members for the blob endpoint, as well as the account name and access key, which we initialize in the constructor we add in step 4. In step 5, we create a container and upload a blob to it.

In step 6, we use the GetSharedAccessSignature() method of the CloudBlockBlob class to get a shared access signature based on a SharedAccessPolicy we pass into it. In this SharedAccessPolicy, we specify that we want read access on a blob from 10 minutes earlier to 40 minutes later than the current time. The fuzzing of the start time is to minimize any risk of the time on the local machine being too far out of sync with the time on the storage service. This approach is the easiest way to get a shared access signature.

In step 7, we construct a Shared Access Signature from first principles, without using the Storage Client library. We generate a string to sign from the desired permissions, the start and expiration times, and the account and container names. We initialize an HMACSHA256 instance from the access key and use it to generate an HMAC from the string to sign. We then create the remainder of the query string, ensuring that the data is correctly URL encoded.

In step 8, we use a shared access signature to initialize a StorageCredentialsSharedAccessSignature instance, which we use to create a CloudBlobClient instance. We use this to construct the CloudBlobContainer and CloudBlockBlob instances we use to download the content of the blob.

In step 9, we use HttpWebRequest and HttpWebResponse objects to perform an anonymous download of the content of a blob. We construct the query string for the request using the Shared Access Signature and direct the request to the appropriate blob endpoint. Note that when constructing urlPath for the storage emulator, we must add a / between {0} and {1} because of a difference in the way the Storage Client library constructs the endpoint for the storage emulator and the storage service. For example, we need to use the following for the storage emulator:

String urlPath = String.Format("{0}/{1}/{2}{3}", blobEndpoint, containerName, blobName, sharedAccessSignature);

In step 10, we add a helper method that invokes all the methods we added earlier. We must replace {CONTAINER_NAME} and {BLOB_NAME} with appropriate names for the container and blob.

There's more...

In step 7, we could create a Shared Access Signature based on a container-level access policy by replacing the definition of stringToSign with the following:

String stringToSign = String.Format("\n\n\n/{0}/{1}\n{2}", accountName, containerName, policyId);

policyId specifies the name of a container-level access policy.

See also

  • The recipe named Using a container-level access policy in this chapter

 

Using a container-level access policy


A shared access policy comprises a set of permissions (read, write, delete, list) combined with start and expiration times for validity of the policy. There are no restrictions on the start and expiration times for a shared access policy. A container-level access policy is a shared access policy associated by name with a container. A maximum of five container-level access policies can be associated simultaneously with a container, but each must have a distinct name.

A container-level access policy improves the management of shared access signatures. There is no way to retract or otherwise disallow a standalone shared access signature once it has been created. However, a shared access signature created from a container-level access policy has validity dependent on the container-level access policy. The deletion of a container-level access policy causes the revocation of all shared access signatures derived from it and they can no longer be used to authenticate a storage request. As they can be revoked at any time, there are no time restrictions on the validity of a shared access signature derived from a container-level access policy.

The container-level access policies for a container are set and retrieved as a SharedAccessPolicies collection of named SharedAccessPolicy objects. The SetPermissions() and GetPermissions() methods of the CloudBlobContainer class set and retrieve container-level access policies. A container-level access policy can be removed by retrieving the current SharedAccessPolicies, removing the specified policy, and then setting the SharedAccessPolicies on the container again.
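For example, removing a single policy can be sketched as follows, assuming a cloudBlobContainer reference and a policy named policyId:

    // A minimal sketch of removing one container-level access policy.
    BlobContainerPermissions permissions = cloudBlobContainer.GetPermissions();
    permissions.SharedAccessPolicies.Remove(policyId);
    cloudBlobContainer.SetPermissions(permissions);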

A shared access signature is derived from a container-level access policy by invoking CloudBlobContainer.GetSharedAccessSignature(), passing in the name of the container-level access policy and an empty SharedAccessPolicy instance. It is not possible to modify the validity of the shared access signature by using a non-empty SharedAccessPolicy.

In this recipe, we will learn how to manage container-level access policies and use them to create shared access signatures.

Getting ready

This recipe assumes the following is in the application configuration file:

<appSettings>
   <add key="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
</appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the account name and access key.

How to do it...

We are going to create, use, modify, revoke, and delete a container-level access policy. We do this as follows:

  1. Add a class named ContainerLevelAccessPolicyExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
  3. Add the following private members to the class:

    private Uri blobEndpoint;
    private CloudBlobContainer cloudBlobContainer;
  4. Add the following constructor to the class:

    ContainerLevelAccessPolicyExample(String containerName)
    {
       CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["DataConnectionString"]);
       blobEndpoint = cloudStorageAccount.BlobEndpoint;

       CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
       cloudBlobContainer = new CloudBlobContainer(containerName, cloudBlobClient);
       cloudBlobContainer.Create();
    }
  5. Add the following method, creating a container-level access policy, to the class:

    private void AddContainerLevelAccessPolicy(String policyId)
    {
       DateTime startTime = DateTime.UtcNow;
       SharedAccessPolicy sharedAccessPolicy = new SharedAccessPolicy()
       {
          Permissions = SharedAccessPermissions.Read | SharedAccessPermissions.Write,
          SharedAccessStartTime = startTime,
          SharedAccessExpiryTime = startTime.AddDays(3d)
       };

       BlobContainerPermissions blobContainerPermissions = new BlobContainerPermissions();
       blobContainerPermissions.SharedAccessPolicies.Add(policyId, sharedAccessPolicy);

       cloudBlobContainer.SetPermissions(blobContainerPermissions);
    }
  6. Add the following method, getting a shared access signature using the container-level access policy, to the class:

    private String GetSharedAccessSignature(String policyId)
    {
       SharedAccessPolicy sharedAccessPolicy = new SharedAccessPolicy();
       String sharedAccessSignature = cloudBlobContainer.GetSharedAccessSignature(sharedAccessPolicy, policyId);

       return sharedAccessSignature;
    }
  7. Add the following method, modifying the container-level access policy, to the class:

    private void ModifyContainerLevelAccessPolicy(String policyId)
    {
       BlobContainerPermissions blobContainerPermissions = cloudBlobContainer.GetPermissions();

       DateTime sharedAccessExpiryTime = (DateTime)blobContainerPermissions.SharedAccessPolicies[policyId].SharedAccessExpiryTime;
       blobContainerPermissions.SharedAccessPolicies[policyId].SharedAccessExpiryTime = sharedAccessExpiryTime.AddDays(1d);

       cloudBlobContainer.SetPermissions(blobContainerPermissions);
    }
  8. Add the following method, revoking a container-level access policy, to the class:

    private void RevokeContainerLevelAccessPolicy(String policyId)
    {
       BlobContainerPermissions containerPermissions = cloudBlobContainer.GetPermissions();

       SharedAccessPolicy sharedAccessPolicy = containerPermissions.SharedAccessPolicies[policyId];
       containerPermissions.SharedAccessPolicies.Remove(policyId);
       containerPermissions.SharedAccessPolicies.Add(policyId + "1", sharedAccessPolicy);

       cloudBlobContainer.SetPermissions(containerPermissions);
    }
  9. Add the following method, deleting all container-level access policies, to the class:

    private void DeleteContainerLevelAccessPolicies()
    {
       BlobContainerPermissions blobContainerPermissions = new BlobContainerPermissions();

       cloudBlobContainer.SetPermissions(blobContainerPermissions);
    }
  10. Add the following method, using the methods added earlier, to the class:

    public static void UseContainerLevelAccessPolicyExample()
    {
       String containerName = "{CONTAINER_NAME}";
       String policyId = "{POLICY_NAME}";

       ContainerLevelAccessPolicyExample example = new ContainerLevelAccessPolicyExample(containerName);

       example.AddContainerLevelAccessPolicy(policyId);
       String sharedAccessSignature1 = example.GetSharedAccessSignature(policyId);

       example.ModifyContainerLevelAccessPolicy(policyId);
       String sharedAccessSignature2 = example.GetSharedAccessSignature(policyId);

       example.RevokeContainerLevelAccessPolicy(policyId);
       String sharedAccessSignature3 = example.GetSharedAccessSignature(policyId + "1");

       example.DeleteContainerLevelAccessPolicies();
    }

How it works...

In steps 1 and 2, we set up the class. In step 3, we add some private members which we initialize in the constructor we add in step 4. We also create a container in the constructor.

In step 5, we create a SharedAccessPolicy instance and add it, keyed by the policy name, to the SharedAccessPolicies property of a BlobContainerPermissions object. Finally, we pass this BlobContainerPermissions object into the SetPermissions() method of the CloudBlobContainer class to create the named container-level access policy on the container.

In step 6, we get a shared access signature based on a specified container-level access policy. We create an empty SharedAccessPolicy instance and pass it, along with the policy name, into the CloudBlobContainer.GetSharedAccessSignature() method to get a shared access signature for the container.

In step 7, we invoke GetPermissions() on the CloudBlobContainer instance to retrieve the shared access policies for the container. We then add one day to the expiration time of a specified container-level access policy. Finally, we invoke CloudBlobContainer.SetPermissions() to update the container-level access policies for the container.

In step 8, we revoke an existing container-level access policy and create a new container-level policy with the same SharedAccessPolicy and a new policy name. We again use GetPermissions() to retrieve the shared access policies for the container and then invoke the Remove() and Add() methods of the SharedAccessPolicies class to perform the revocation. Finally, we invoke CloudBlobContainer.SetPermissions() to update the container-level access policies for the container.

In step 9, we delete all container-level access policies for a container. We create a default BlobContainerPermissions instance and pass this into CloudBlobContainer.SetPermissions() to remove all the container-level access policies for the container.

In step 10, we add a helper method that invokes all the methods we added earlier. We need to replace {CONTAINER_NAME} and {POLICY_NAME} with the names of a container and a policy for it.

 

Authenticating against the Windows Azure Service Management REST API


The Windows Azure Portal provides a user interface for managing Windows Azure hosted services and storage accounts. The Windows Azure Service Management REST API provides a RESTful interface that allows programmatic control of hosted services and storage accounts. It supports most, but not all, of the functionality provided in the Windows Azure Portal.

The Service Management API uses an X.509 certificate for authentication. This certificate must be uploaded as a management certificate to the Windows Azure Portal. Unlike service certificates, management certificates are not deployed to role instances. Consequently, if the Service Management API is to be accessed from an instance of a role, it is necessary to upload the certificate twice: as a management certificate for authentication and as a server certificate that is deployed to the instance. The latter also requires appropriate configuration in the service definition and service configuration files.
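As a sketch (the certificate name here is hypothetical), the relevant service definition and service configuration entries for deploying such a certificate to role instances look like the following:

    <!-- ServiceDefinition.csdef: declare the certificate inside the role -->
    <Certificates>
       <Certificate name="ManagementApiCertificate" storeLocation="LocalMachine" storeName="My" />
    </Certificates>

    <!-- ServiceConfiguration.cscfg: identify the certificate by thumbprint -->
    <Certificates>
       <Certificate name="ManagementApiCertificate" thumbprint="{THUMBPRINT}" thumbprintAlgorithm="sha1" />
    </Certificates>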

A management certificate can be self-signed because it is used only for authentication. As Visual Studio uses the Service Management API to upload and deploy packages, it provides tooling for creating such a certificate; this is an option on the Publish dialog. A management certificate can also be created using makecert as follows:

C:\Users\Administrator>makecert -r -pe -sky exchange
-a sha1 -len 2048 -ss my -n "CN=Azure Service Management"
AzureServiceManagement.cer

This creates an X.509 certificate and installs it in the Personal (My) branch of the Current User level of the certificate store. It also creates a file named AzureServiceManagement.cer containing the certificate in a form that can be uploaded as a management certificate to the Windows Azure Portal. The certificate is self-signed (-r), with an exportable private key (-pe), created with the SHA-1 hash algorithm (-a), a 2048-bit key (-len), and with a key type of exchange (-sky).

Once created, the certificate must be uploaded to the Management Certificates section of the Windows Azure Portal. This section is at the subscription level—not the service level—as the Service Management API has visibility across all hosted services and storage accounts under the subscription.

In this recipe, we will learn how to authenticate to the Windows Azure Service Management REST API.

How to do it...

We are going to authenticate against the Windows Azure Service Management REST API and retrieve the list of hosted services for the subscription. We do this as follows:

  1. Add a class named ServiceManagementExample to the project.

  2. Add the following using statements to the top of the class file:

    using System.Net;
    using System.IO;
    using System.Xml.Linq;
    using System.Security.Cryptography.X509Certificates;
    using System.Collections.Generic;
    using System.Linq;
  3. Add the following private members to the class:

    XNamespace ns = "http://schemas.microsoft.com/windowsazure";
    String apiVersion = "2011-02-25";
    String subscriptionId;
    String thumbprint;
  4. Add the following constructor to the class:

    ServiceManagementExample(String subscriptionId, String thumbprint)
    {
       this.subscriptionId = subscriptionId;
       this.thumbprint = thumbprint;
    }
  5. Add the following method, retrieving the client authentication certificate from the certificate store, to the class:

    private static X509Certificate2 GetX509Certificate2(String thumbprint)
    {
       X509Certificate2 x509Certificate2 = null;
       X509Store store = new X509Store("My", StoreLocation.CurrentUser);
       try
       {
          store.Open(OpenFlags.ReadOnly);
          X509Certificate2Collection x509Certificate2Collection = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
          x509Certificate2 = x509Certificate2Collection[0];
       }
       finally
       {
          store.Close();
       }
       return x509Certificate2;
    }
  6. Add the following method, creating a web request, to the class:

    private HttpWebRequest CreateHttpWebRequest(Uri uri, String httpWebRequestMethod)
    {
       X509Certificate2 x509Certificate2 = GetX509Certificate2(thumbprint);

       HttpWebRequest httpWebRequest = (HttpWebRequest)HttpWebRequest.Create(uri);
       httpWebRequest.Method = httpWebRequestMethod;
       httpWebRequest.Headers.Add("x-ms-version", apiVersion);
       httpWebRequest.ClientCertificates.Add(x509Certificate2);
       httpWebRequest.ContentType = "application/xml";

       return httpWebRequest;
    }
  7. Add the following method, invoking the List Hosted Services operation of the Service Management API, to the class:

    private IEnumerable<String> GetHostedServices()
    {
       XElement xElement;

       String uriPath = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionId);
       Uri uri = new Uri(uriPath);

       HttpWebRequest httpWebRequest = CreateHttpWebRequest(uri, "GET");
       using (HttpWebResponse response = (HttpWebResponse)httpWebRequest.GetResponse())
       {
          Stream responseStream = response.GetResponseStream();
          xElement = XElement.Load(responseStream);
       }

       IEnumerable<String> serviceNames =
          from s in xElement.Descendants(ns + "ServiceName")
          select s.Value;

       return serviceNames;
    }
  8. Add the following method, invoking the methods added earlier, to the class:

    public static void UseServiceManagementExample()
    {
       String subscriptionId = "{SUBSCRIPTION_ID}";
       String thumbprint = "{THUMBPRINT}";

       ServiceManagementExample example = new ServiceManagementExample(subscriptionId, thumbprint);
       IEnumerable<String> serviceNames = example.GetHostedServices();
       List<String> listServiceNames = serviceNames.ToList<String>();
    }

How it works...

In steps 1 and 2, we set up the class. In step 3, we add some private members including the subscriptionId and thumbprint that we initialize in the constructor we add in step 4. The ns member specifies the namespace used in the XML data returned by the Service Management API. In apiVersion, we specify the API version sent with each request to the Service Management API. The supported versions are listed on MSDN at http://msdn.microsoft.com/en-us/library/gg592580.aspx. Note that the API version is updated whenever new operations are added and that an operation can use a newer API version than the one it was released with.

In step 5, we retrieve the X.509 certificate we use to authenticate against the Service Management API. We use the certificate thumbprint to find the certificate in the Personal/Certificates (My) branch of the Current User level of the certificate store.

In step 6, we create and initialize the HttpWebRequest used to submit an operation to the Service Management API. We add the X.509 certificate to the request. We also add an x-ms-version request header specifying the API version of the Service Management API we are using.

In step 7, we submit the request to the Service Management API. We first construct the appropriate URL, which depends on the particular operation being performed. Then, we make the request and load the response into an XElement. We query this XElement to retrieve the names of the hosted services created for the subscription and return them in an IEnumerable&lt;String&gt;.

In step 8, we add a helper method that invokes each of the methods we added earlier. We must replace {SUBSCRIPTION_ID} with the subscription ID for the Windows Azure subscription and replace {THUMBPRINT} with the thumbprint of the management certificate. These can both be found on the Windows Azure Portal.

There's more...

The Microsoft Management Console (MMC) can be used to navigate the certificate store to view and, if desired, export an X.509 certificate. In MMC, the Certificates snap-in provides the choice of navigating through either the Current User or Local Machine level of the certificate store. The makecert command used earlier inserts the certificate in the Personal branch of the Current User level of the certificate store. The certificate export wizard can be found by selecting All Tasks on the certificate's right-click menu and then choosing Export…. This wizard supports export both with and without the private key. Exporting without the private key creates the .CER file needed for a management certificate, while exporting with the private key creates the password-protected .PFX file needed for a service certificate for a hosted service.

The thumbprint is one of the certificate properties that are displayed in the MMC certificate snap-in. However, the snap-in displays the thumbprint as a space-separated, lower-case string. When using the thumbprint in code, it must be converted to an upper-case string with no spaces.
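A small helper (the name is ours) can perform this normalization:

    // A minimal sketch: convert the thumbprint as displayed by the MMC snap-in
    // (space-separated, lower-case) into the form used in code.
    private static String NormalizeThumbprint(String displayedThumbprint)
    {
       return displayedThumbprint.Replace(" ", String.Empty).ToUpperInvariant();
    }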

 

Authenticating with the Windows Azure AppFabric Caching Service


The Windows Azure AppFabric Caching Service provides a hosted data cache along with local caching of that data. It provides a cloud-hosted version of the Windows Server AppFabric Caching Service.

All access to the caching service must be authenticated using a service namespace and an authentication token. These are generated on the Windows Azure Portal. The service namespace is similar to the account name used with the storage services. It forms the base of the service URL used in accessing the caching service.

The Windows Azure Access Control Service (ACS) is used to authenticate requests to the caching service. However, the complexity of this is abstracted by the caching service SDK. A DataCacheSecurity instance is constructed from the authentication token. To reduce the likelihood of the authentication token remaining in memory in an accessible form, the DataCacheSecurity constructor requires that it be passed in as a SecureString rather than a simple String. This DataCacheSecurity instance is then added to a DataCacheFactoryConfiguration instance. This is used to initialize the DataCacheFactory used to create the DataCache object used to interact with the caching service.

In this recipe, we will learn how to authenticate to the Windows Azure AppFabric Caching Service.

Getting ready

This recipe uses the Windows Azure AppFabric SDK. It also requires the creation—on the Windows Azure Portal—of a namespace for the Windows Azure AppFabric Caching Service. We see how to do this in the Creating a namespace for the Windows Azure AppFabric recipe in Chapter 9.

How to do it...

We are going to authenticate against the Windows Azure AppFabric Caching Service and cache an item in the service. We do this as follows:

  1. Add a class named AppFabricCachingExample to the project.

  2. Add the following assembly references to the project:

    Microsoft.ApplicationServer.Caching.Client
    Microsoft.ApplicationServer.Caching.Core
  3. Add the following using statements to the top of the class file:

    using Microsoft.ApplicationServer.Caching;
    using System.Collections.Generic;
    using System.Security;
  4. Add the following private members to the class:

    Int32 cachePort = 22233;
    String hostName;
    String authenticationToken;
    DataCache dataCache;
  5. Add the following constructor to the class:

    AppFabricCachingExample(String hostName, String authenticationToken)
    {
       this.hostName = hostName;
       this.authenticationToken = authenticationToken;
    }
  6. Add the following method, creating a SecureString, to the class:

    static private SecureString CreateSecureString(String token)
    {
       SecureString secureString = new SecureString();
       foreach (char c in token)
       {
          secureString.AppendChar(c);
       }
       secureString.MakeReadOnly();
       return secureString;
    }
    
  7. Add the following method, initializing the cache, to the class:

    private void InitializeCache()
    {
       DataCacheSecurity dataCacheSecurity = new DataCacheSecurity(CreateSecureString(authenticationToken), false);

       List<DataCacheServerEndpoint> server = new List<DataCacheServerEndpoint>();
       server.Add(new DataCacheServerEndpoint(hostName, cachePort));

       DataCacheTransportProperties dataCacheTransportProperties = new DataCacheTransportProperties()
       {
          MaxBufferSize = 10000,
          ReceiveTimeout = TimeSpan.FromSeconds(45)
       };

       DataCacheFactoryConfiguration dataCacheFactoryConfiguration = new DataCacheFactoryConfiguration()
       {
          SecurityProperties = dataCacheSecurity,
          Servers = server,
          TransportProperties = dataCacheTransportProperties
       };

       DataCacheFactory myCacheFactory = new DataCacheFactory(dataCacheFactoryConfiguration);
       dataCache = myCacheFactory.GetDefaultCache();
    }
  8. Add the following method, inserting an entry to the cache, to the class:

    private void PutEntry(String key, String value)
    {
       dataCache.Put(key, value);
    }
  9. Add the following method, retrieving an entry from the cache, to the class:

    private String GetEntry(String key)
    {
       String playWright = dataCache.Get(key) as String;
       return playWright;
    }
  10. Add the following method, invoking the methods added earlier, to the class:

    public static void UseAppFabricCachingExample()
    {
       String hostName = "{SERVICE_NAMESPACE}.cache.windows.net";
       String authenticationToken = "{AUTHENTICATION_TOKEN}";

       String key = "{KEY}";
       String value = "{VALUE}";

       AppFabricCachingExample example = new AppFabricCachingExample(hostName, authenticationToken);

       example.InitializeCache();
       example.PutEntry(key, value);
       String theValue = example.GetEntry(key);
    }

How it works...

In steps 1 through 3, we set up the class. In step 4, we add some private members to hold the caching service endpoint information and the authentication token. We initialize these in the constructor we add in step 5.

In step 6, we add a method that creates a SecureString from a normal String. The authentication token used when working with the caching service SDK must be a SecureString. Typically, this would be initialized in a more secure fashion than from a private member.

In step 7, we first initialize the objects used to configure a DataCacheFactory object. We need to provide the authentication token and the caching service endpoint. We specify a ReceiveTimeout of less than 1 minute to reduce the possibility of an error caused by stale connections. We use the DataCacheFactory to get the DataCache for the default cache for the caching service. Note that in this recipe, we did not configure a local cache.

In step 8, we insert an entry in the cache. Note that we use Put() rather than Add() here, as Add() throws an exception if an entry with the same key is already cached. We retrieve the entry from the cache in step 9.
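The difference can be illustrated with a small sketch (the key and values are arbitrary):

    dataCache.Put("playwright", "Shaw");     // creates the entry
    dataCache.Put("playwright", "Chekhov");  // overwrites the value
    dataCache.Add("playwright", "Ibsen");    // throws a DataCacheException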

In step 10, we add a helper method that invokes each of the methods we added earlier. We must replace {SERVICE_NAMESPACE} and {AUTHENTICATION_TOKEN} with actual values for the caching service namespace and authentication token that we created on the Windows Azure Portal. We can replace {KEY} and {VALUE} with appropriate values.

About the Author
  • Neil Mackenzie

    Neil Mackenzie has worked with computers for nearly 3 decades. He started his computer career doing large-scale numerical simulations for scientific research and business planning. Since then, he has primarily been involved in healthcare software and developing electronic medical record systems. He has been using Microsoft Azure since PDC 2008 and has used nearly all parts of the Microsoft Azure platform, including those parts that no longer exist. Neil is very active in the online Microsoft Azure community, contributing to the MSDN Microsoft Azure forums in particular. He is a Microsoft MVP for Microsoft Azure.
