Creating SPNs [Service Principal Names], Service Plans, Azure Web Apps

Every time I deploy a web app via VSTS or Octopus, the service principal and the web app are assumed to already exist: creation is manual and has to happen in advance of the deployment process. There is always a dropdown that only lets you pick an existing service principal and web app name, with no way to create new ones. [I hope this will change soon and this post will become unnecessary :)] The meetup demos I attend, as well as the MSDN documentation, are happy to show how to add these manually.

However, our Azure governance model follows a functional pattern: one subscription per environment and one SPN per resource group. So I should be able to create an SPN for each environment automatically, from scratch, to automate our pipeline. Besides, I don’t like doing things manually…

VSTS service endpoints and Octopus accounts:
[Image: VSTS Services] [Image: Octopus Accounts]

Part I: Creating SPNs

So, what is an SPN? Think of service accounts. For each application [essentially an identifier URI], you create a service principal with a password [or a certificate] and a homepage URL.
Azure has RBAC, so you can set any level of permission on any object for any user. The basic built-in roles are “Reader”, “Contributor” and “Owner”. For simplicity I will use the Contributor role scoped to a specific resource group. You may want more granular permissions on the resource group, such as having app service plans and websites created by different service principals, or, if the sky is the limit, you can write your own role definitions too! Check out the MSDN site for role definitions.
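As a sketch of what such a granular custom role could look like (the role name and the action list below are hypothetical choices, and the subscription id is a placeholder; you would register this JSON with New-AzureRmRoleDefinition):

```json
{
  "Name": "Web App Deployer",
  "IsCustom": true,
  "Description": "Can manage app service plans and web apps only.",
  "Actions": [
    "Microsoft.Web/serverfarms/*",
    "Microsoft.Web/sites/*",
    "Microsoft.Resources/subscriptions/resourceGroups/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  ]
}
```

A role like this would let the SPN manage plans and sites without being able to touch, say, storage accounts in the same resource group.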


function New-AzureSpn{
param([string]$Subscriptionid ,
[string]$environmentName,
[string]$ApplicationName,
[string]$resourceGroupName,
[string]$location,
[string]$password
)
#Login-AzureRmAccount
$DisplayName =$ApplicationName+ $environmentName + "SPN"
$HomePage = "http://$applicationName.clouddev.com"
$IdentifierUri = "http://$applicationName.clouddev.com"
#refer the first script
###########################################################################
#Step1: Check there is no existing AzureAD Application with the same Uri:
#Names are not unique, but the IdentifierUri should be unique
###########################################################################
$clientApplication=Get-AzureRmADApplication -IdentifierUri $identifierUri
If ($clientApplication) {
Write-Output "There is already an AD Application for this URI: $identifierUri"
#Either remove, stop the process and rename the Uri, or get the ADApplication, which is the default behavior here:
#Remove-AzureRmADApplication -ObjectId $clientApplication.ObjectId -Force
}
else
{
$clientApplication = New-AzureRmADApplication -DisplayName $displayName -HomePage $homePage -IdentifierUris $identifierUri -Password $password -Verbose
Write-Output "A new AzureAD Application is created for this URI: $identifierUri"
}
################################################
#Step2: Create Application and SPN:
################################################
$clientId = $clientApplication.ApplicationId
Write-Output "Azure AAD Application creation completed successfully (Application Id: $clientId)" -Verbose
if((Get-AzureRmADServicePrincipal -ServicePrincipalName $clientId -ErrorAction SilentlyContinue) -eq $null){
$spn = New-AzureRmADServicePrincipal -ApplicationId $clientId
}
else {
$spn = Get-AzureRmADServicePrincipal -ServicePrincipalName $clientId
}
# Assign the Contributor role to the SPN, scoped to the resource group
$spnRole = "Contributor"
$resourceGroup=Get-AzureRmResourceGroup -Name $resourceGroupName -ErrorAction SilentlyContinue
if ( $null -eq $resourceGroup)
{
New-AzureRmResourceGroup -Name $resourceGroupName -Location $location
}
New-AzureRmRoleAssignment -RoleDefinitionName $spnRole -ServicePrincipalName $clientId -Verbose -ResourceGroupName $resourceGroupName
################################################
#Step3: Login to verify the SPN
################################################
#If you have not yet,now we can login and verify our new spn:
#Login-AzureRmAccount -Credential $creds -ServicePrincipal -TenantId $tenantId
#Use the subscriptionId parameter from loginazuresubscription script.
$tenantId= (Get-AzureRmSubscription -SubscriptionId $subscriptionid).TenantId
$objectId=$spn.Id
#Cleanup actions: enable below if you want to clean up the Azure AD application
#Remove-AzureRmADApplication -ObjectId $clientApplication.ObjectId -Force
################################################
#Step4: Get the packer/VSTS/Octopus info
################################################
Write-Host 'subscription_Id :' $subscriptionid.tostring()
Write-Host 'tenant_Id : ' $tenantId.ToString()
Write-Host 'object_id :' $objectId.ToString()
write-Host 'client_id/Username/SPN Name :' $clientId.ToString()
write-Host 'client_secret/Password : ' $password.ToString()
Write-Host 'spn_displayname: ' $DisplayName.Tostring()
}
$params= @{
Password=New-Guid
ResourceGroupName='resourcegroup'
Subscriptionid='xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
Location='location'
ApplicationName='app'
EnvironmentName='dev'
}
Login-AzureRmAccount
New-AzureSpn @params

Part II: Creating Service Plan and WebApps

For each web application, we need an App Service plan, i.e. a hosting plan that defines:
– Region (West US, East US, etc.)
– Scale count (one, two, three instances, etc.)
– Instance size (Small, Medium, Large)
– SKU (Free, Shared, Basic, Standard, Premium)
And we will deploy our web app on a service plan with the service principal we have created. The nice thing is that Get-AzureRmWebAppPublishingProfile gives you all the deployment account details it has just created [handy if you are thinking of other deployment methods].
And one thing we found useful was to set ‘AppServiceUse32BitWorkerProcess’ to true. [Scott Hanselman has a great post about it!]
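For reference, the WebDeploy publish profile that comes back is XML of roughly this shape (the site name and URLs below are illustrative, and the password is elided):

```xml
<publishData>
  <publishProfile profileName="mysite - Web Deploy"
                  publishMethod="MSDeploy"
                  publishUrl="mysite.scm.azurewebsites.net:443"
                  userName="$mysite"
                  userPWD="..."
                  destinationAppUrl="http://mysite.azurewebsites.net" />
</publishData>
```

The script below selects exactly these attributes (publishMethod, publishUrl, userName, userPWD) out of the profile.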


###############################################
function Add-Account {
param(
[string]$AzureTenantId,
[string]$AzureServicePrincipalName,
[string]$AzureSPNPassword
)
###############################################
##Step1: Get Variables
$SPNNamingStandard='^[a-z]{5,40}$'
###############################################
##Step2: Validate Variables:
if (!($AzureServicePrincipalName -match $SPNNamingStandard))
{
Write-Output "SPN is not in the right format"
}
###############################################
##Step3: Create Account
Write-Output "Creating Account"
$SecurePassword = ConvertTo-SecureString -asplaintext -force $AzureSPNPassword
$SecureCredential = New-Object System.Management.Automation.PSCredential ($AzureServicePrincipalName, $SecurePassword)
write-output '###############################################'
write-output '##Step4: Login to the SPN Account'
try{
write-output "Adding AzureRM Account"
Add-AzureRmAccount -ServicePrincipal -Tenant $AzureTenantId -Credential $SecureCredential
}
catch {
Write-Output $_
throw "Cannot add account $AzureServicePrincipalName"
}
}
###############################################
###############################################
## Check and Create Service Plan
function New-AzureAppServicePlan{
param([string]$ResourceGroupName,
[string]$AppServicePlanName,
[string]$Location,
[string]$AppServicePlanNumberofWorkers,
[string]$AppServicePlanWorkerSize,
[string]$AppServicePlanTier
)
try{
$ServicePlan= Get-AzureRmAppServicePlan -ResourceGroupName $ResourceGroupName -Name $AppServicePlanName -ErrorAction SilentlyContinue
if ($null -eq $ServicePlan)
{
$ServicePlan=New-AzureRmAppServicePlan -Name $AppServicePlanName -Location $Location -ResourceGroupName $ResourceGroupName -Tier $AppServicePlanTier -WorkerSize $AppServicePlanWorkerSize -NumberofWorkers $AppServicePlanNumberofWorkers
}
}
catch{
Write-Output "Cannot add serviceplan : $AppServicePlanName "
Write-Output $_
Throw "Something went wrong"
}
return $ServicePlan
}
###############################################
## Check and Create Web App
function New-AzureWebApp {
param(
[bool]$AppServiceUse32BitWorkerProcess,
[string]$AppServicePlanName,
[string]$Location,
[string]$PublishProfilePath,
[string]$ResourceGroupName,
[string]$WebAppName
)
try{
$WebApp = Get-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $WebAppName -ErrorAction SilentlyContinue
if($null -eq $WebApp)
{
$WebApp = New-AzureRmWebApp -Name $WebAppName -AppServicePlan $AppServicePlanName -ResourceGroupName $ResourceGroupName -Location $Location
}
Set-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $WebAppName -Use32BitWorkerProcess $AppServiceUse32BitWorkerProcess
if (!(Test-Path -Path (Split-Path $PublishProfilePath -Parent))){
throw [System.IO.FileNotFoundException] "Parent folder of $PublishProfilePath not found."
}
$profile = Get-AzureRmWebAppPublishingProfile -OutputFile $PublishProfilePath -ResourceGroupName $ResourceGroupName -Name $WebAppName -Format WebDeploy -Verbose
if ($profile){
([xml] $profile).publishData.publishProfile | select publishMethod, publishUrl, userName, userPWD
}
else{
throw "There was a problem with your publish profile, check your webapp"
}
}
catch{
Write-Output "Cannot add webapp : $WebAppName"
Write-Output $_
}
return $WebApp
}
###############################################
##Step1: Define the Variables
$ServicePlanParams= @{
ResourceGroupName = "resourcegroup"
Location = "NorthEurope"
AppServicePlanName = "AppServicePlanName"
AppServicePlanTier = "Basic"
AppServicePlanWorkerSize = "Small"
AppServicePlanNumberofWorkers =3
}
#Define the timestamp before the hashtable: a hashtable key does not create a $TimeStamp variable
$TimeStamp = Get-Date -Format ddMMyyyy_hhmmss
$WebAppPlanParams= @{
WebAppName = "wineAppDemo20170809"
AppServicePlanName = "AppServicePlanName"
ResourceGroupName = "resourcegroup"
Location = "NorthEurope"
PublishProfilePath = Join-Path -Path $ENV:Temp -ChildPath "publishprofile$TimeStamp.xml"
AppServiceUse32BitWorkerProcess=$true
}
$AzureAccountParams= @{
AzureTenantId=$AzureTenantId
AzureServicePrincipalName=$AzureServicePrincipalName
AzureSPNPassword=$AzureSPNPassword
}
Add-Account @AzureAccountParams
New-AzureAppServicePlan @ServicePlanParams
New-AzureWebApp @WebAppPlanParams

TeamCity running on Docker

In one of the sessions at JaxLondon, Paul Stack mentioned they were running TeamCity in containers at HashiCorp. Because I do quite a number of trainings, demos and talks about Continuous Delivery, having the CI server and agents portable and containerised is a big win for me. After I saw that JetBrains has official Docker images [for the server and the agents] on DockerHub, I decided to do it sooner rather than later.
There are quite a few things I will cover to get a good feel for containers.


Step1: Setup:
I will use Docker Toolbox on my Mac to create the TeamCity server and agents. Two folders are required on my host for the TeamCity server: a data folder and a logs folder, to be mounted as volumes into the server container.



Step2: Creating VirtualBox VMs:

I have Docker Toolbox installed on my Mac. Why not Docker for Mac? Purely because I want to rely on VirtualBox to manage my machines, and keep environment variables per VirtualBox VM.

mymac:~ demokritos$ docker-machine create --driver virtualbox teamcityserver
mymac:~ demokritos$ docker-machine start teamcityserver
Starting "teamcityserver"...
(teamcityserver) Waiting for an IP...
Machine "teamcityserver" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses.
You may need to re-run the `docker-machine env` command.
mymac:~ demokritos$ docker-machine env teamcityserver
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/demokritos/.docker/machine/machines/teamcityserver"
export DOCKER_MACHINE_NAME="teamcityserver"
# Run this command to configure your shell:
# eval $(docker-machine env teamcityserver)
mymac:~ demokritos$ eval $(docker-machine env teamcityserver)


Step3: Create and share our volumes:

We need to create the folders, give write permission to our group [my user is in the wheel group], and share the folders with Docker. You can run `stat $folder` to display the permissions.

mymac:~ demokritos$ sudo mkdir -p /opt/teamcity_server/logs
mymac:~ demokritos$ sudo mkdir -p /data/teamcity_server/datadir
mymac:~ demokritos$ sudo chmod g+rw /opt/teamcity_server/logs
mymac:~ demokritos$ sudo chmod g+rw /data/teamcity_server/datadir

And share them in Docker preferences:
[Image: 2 folders to share]

This will avoid errors like :
docker: Error response from daemon: error while creating mount source path ‘/opt/teamcity_server/logs’: mkdir /opt/teamcity_server/logs: permission denied.


Step4: Run the container:

sudo docker run -it --name teamcityserver \
  -e TEAMCITY_SERVER_MEM_OPTS="-Xmx2g -XX:MaxPermSize=270m -XX:ReservedCodeCacheSize=350m" \
  -v /data/teamcity_server/datadir:/data/teamcity_server/datadir \
  -v /opt/teamcity_server/logs:/opt/teamcity_server/logs \
  -p 50004:8111 jetbrains/teamcity-server
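The same run command can be captured declaratively. A minimal docker-compose sketch, assuming the official jetbrains/teamcity-server and jetbrains/teamcity-agent images (the agent service and the SERVER_URL wiring are a preview of the agent setup, not something configured above):

```yaml
version: "2"
services:
  teamcityserver:
    image: jetbrains/teamcity-server
    environment:
      TEAMCITY_SERVER_MEM_OPTS: "-Xmx2g -XX:MaxPermSize=270m -XX:ReservedCodeCacheSize=350m"
    volumes:
      # Same host paths as the docker run command above
      - /data/teamcity_server/datadir:/data/teamcity_server/datadir
      - /opt/teamcity_server/logs:/opt/teamcity_server/logs
    ports:
      - "50004:8111"
  teamcityagent:
    image: jetbrains/teamcity-agent
    environment:
      # Agents reach the server by its compose service name
      SERVER_URL: "http://teamcityserver:8111"
    depends_on:
      - teamcityserver
```

With this in place, `docker-compose up -d` brings up both containers, and the name-conflict errors below go away because compose manages the container lifecycle for you.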

If you get an error like :

docker: Error response from daemon: Conflict.
The container name "/teamcityserver" is already in use
by container 4143c2d13192b8020f066b13a2c033750b4ac1ac7d54e822a6b31a5f47489647.
You have to remove (or rename) that container to be able to reuse that name..

Then, if you can find it with `docker ps -aq`, you can remove it in your terminal; if not, open a new terminal and remove it, e.g.:

 docker rm 4143c2d13192 

There is a long discussion on moby’s GitHub repo, if you are interested…

And TC server is ready to be configured… Next, we will set up the agents…

Installing Zabbix 3.2 on AWS Ubuntu 16.04

Hello,

I had a challenge: getting my Zabbix server up and running on AWS. This initial version is bash scripts; future versions will be smarter… The Zabbix version I will install is 3.2.

A. Setup:

  • Image: Ubuntu Server 16.04 LTS (HVM), SSD Volume Type – ami-a8d2d7ce
  • Type: t2.micro
  • Storage: 8 gig
  • Tag: Name = Zabbix
  • Security group: SSH [TCP/22], HTTP [TCP/80] and Zabbix agent [TCP/10050], for access from anywhere.

B. Installations for Zabbix Server:
#Get the updated repos and install LAMP server. Notice the ^.

$ sudo apt-get update
$ sudo apt-get install lamp-server^

Note down the password you set for mysql, as it will be used later on:

$ sudo service apache2 restart
$ sudo systemctl enable apache2
$ wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
$ sudo dpkg -i zabbix-release_3.2-1+xenial_all.deb
$ sudo apt-get update
$ sudo apt-get install zabbix-server-mysql zabbix-frontend-php
$ sudo service mysql start

To secure our mysql, we need to configure a few options. Say No to changing the password, and Yes to the rest of the questions.

$ sudo mysql_secure_installation

We will create the zabbix database and set a new password. Keep the quotation marks. Note that we will use the root password to connect to mysql.

$ mysql -uroot -p 
mysql> create database zabbix character set utf8 collate utf8_bin;
mysql> grant all privileges on zabbix.* to zabbix@localhost identified by '';
mysql> quit;

We need to restore the zabbix schema into the database we created. It will prompt you to enter the zabbix user’s password to connect to the zabbix database.

$ cat /usr/share/doc/zabbix-server-mysql/create.sql.gz |
 mysql -uzabbix -p zabbix

We also need to keep the password in zabbix server configuration:

$ sudo vi /etc/zabbix/zabbix_server.conf
>DBHost=localhost
>DBName=zabbix
>DBUser=zabbix
>DBPassword=''
$ sudo service zabbix-server start
$ sudo update-rc.d zabbix-server enable

Change /etc/zabbix/apache.conf: uncomment the php_value line for date.timezone and set it to your relevant timezone.

$ sudo vi /etc/zabbix/apache.conf
>php_value date.timezone Europe/London

Restart the apache server:

$ sudo service apache2 restart

Browse your server at http://<server-address>/zabbix :

Note1: If you get errors on the page:
Error1:

PHP bcmath extension missing (PHP configuration parameter --enable-bcmath).
PHP mbstring extension missing (PHP configuration parameter --enable-mbstring).
PHP xmlwriter extension missing.
PHP xmlreader extension missing.

Run on the server:

$ sudo apt-get install php-bcmath
$ sudo apt-get install php-mbstring
$ sudo apt-get install php-xml

Error2:
Zabbix discoverer processes more than 75% busy
Solution: increase the number of discoverer processes [StartDiscoverers, default 1] in the server configuration, then restart:

$ sudo vi /etc/zabbix/zabbix_server.conf
>StartDiscoverers=5
$ sudo service zabbix-server restart
$ sudo service apache2 restart

Error3:
Lack of free swap space on Zabbix server

sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab
sudo swapon -a 

C. Add agents to Centos/Ubuntu machines :
#Installing Zabbix agent on Ubuntu 16.04:

sudo wget http://repo.zabbix.com/zabbix/3.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.0-1+xenial_all.deb
sudo dpkg -i zabbix-release_3.0-1+xenial_all.deb
sudo apt-get update
sudo apt-get install zabbix-agent
sudo service zabbix-agent start

Installing Zabbix agent on Centos 7.3:

sudo rpm -ivh http://repo.zabbix.com/zabbix/3.0/rhel/7/x86_64/zabbix-release-3.0-1.el7.noarch.rpm
sudo yum update
sudo yum install zabbix-agent
sudo service zabbix-agent start

Error4:
Agent is not starting on Centos 7.3 with “Permission denied” [SELinux]:

Investigations: 

# tail -3 /var/log/zabbix/zabbix_agentd.log
...
$ cat /var/log/audit/audit.log | grep zabbix_agentd | grep denied | tail -1
type=AVC msg=audit(1494325619.250:1410): avc: denied { setrlimit } for pid=26242 comm="zabbix_agentd" scontext=system_u:system_r:zabbix_agent_t:s0 tcontext=system_u:system_r:zabbix_agent_t:s0 tclass=process

Solution :
Get the required policy and apply the output displayed:

$ sudo cat /var/log/audit/audit.log | grep zabbix_agentd | grep denied | tail -1 | sudo audit2allow -M zabbix_agent_setrlimit
******************** IMPORTANT ***********************
To make this policy package active, execute:
semodule -i zabbix_agent_setrlimit.pp
# sudo semodule -i zabbix_agent_setrlimit.pp
# sudo systemctl daemon-reload
# sudo systemctl start zabbix-agent

JaxDevops 2017

I had the chance to attend JaxDevOps London; here is a valuable session from Daniel Bryant about common mistakes made with Microservices…

  1.  7 (MORE) DEADLY SINS:
    1. Lust [Use the Unevaluated Latest and Greatest Tech]:
      1. Be an expert on Evaluation
      2. Spine Model: going up the spine solves the problems; start not from the first step [Tools] but from Practices, Principles, Values, Needs.
    2. Gluttony: Communication Lock-In
      1. Don’t rule out RPC [eg. GRPC]
      2. Stick to the Principle of Least Surprise: [Json over Https]
      3. Don’t let the API Gateway morph into an ESB
      4. Check the cool tools: Mulesoft, Kong, Apigee, AWS API Gateway
    3. Greed: What Is Mine [within the Org]
      1. “We’ve decided to reform our teams around squads, chapters, and Guilds”:  Be aware of Cargo-Culting:
    4. Sloth: Getting Lazy with NFRs:
      1. Ilities: “Availability, Scalability, Auditability, Testability” can become an afterthought
      2. Security: Aaron Grattafiori DockerCon2016 Talk/InfoQ
      3. Thoughtworks: AppSec & Microservices
      4. Build Pipeline:
          1. Performance and load testing:
          1. Gatling/JMeter
          2. Flood.IO [upload Gatling script/scale]
        2. Security Testing:
          1. FindSecBugs/OWASP dependency check
          2. BDD-Security (OWASP ZAP)/Arachni
          3. Gauntlt/Serverspec
          4. Docker Bench for security/Clair
    5. Wrath: Blowing Up When Bad Things Happen
      1. Michael Nygard (Release It!): Turn ops into a Simian Army
      2. Distributed Transactions:
        1. Don’t push transactional scope into Single Service
        2. Supervisor/Processor Manager: Erlang OTP, Akka, EIP
      3. Focus on What Matters:
        1. CI/CD
        2. Mechanical Sympathy
        3. Logging
        4. Monitoring
      4. Consider:
        1. DEIS
        2. CloudFoundry
        3. OpenShift
    6. Envy: The Shared Single Domain and (Data Store) Fallacy
      1. Know your DD:
        1. Entities
        2. Value Objects
        3. Aggregates and Roots
        4. Book:
          1. Implementing Domain-Driven Design
          2. Domain-Driven Design Distilled [high level]
            1. Context Mapping [Static] & Event Storming [Dynamic]
              1. infoq
              2. ziobrando
            2. Data Stores:
              1. RDBMS:
              2. Cassandra
              3. Graph -> Neo4J, Titan
              4. Support! Op Overhead
    7. Pride: Testing in the World
      1. Testing Strategies in a Microservice Architecture [Martin Fowler]
      2. Andrew Morgan [Virtual API Service Testing]
      3. Service Virtualisation:
        1. Classic Ones:
          1. CA Service Virtualization
          2. Parasoft Virtualize
          3. HPE Service Virtualization
          4. IBM Test Virtualization Server
        2. New kids:
          1. [SpectoLabs] Hoverfly: Lightweight
            1. Fault Injection
            2. Chaos Monkey
          2. Wiremock
          3. VCR/BetaMax
          4. Mountebank
          5. Mirage


Getting latest workspace…

Getting the latest code for all workspaces can be time-consuming, and forgetting to do so can cause bigger issues…

So, here is the remedy:

There is a hard-coded “d” drive in the script to change the drive and navigate to the source code folders. If your code is on the c drive you can just remove that part…

************************************************

get latest

************************************************

I think it would be handier with more error handling and a report at the end, but as a quick solution, there it is…

Business Model Canvas

 

I have ideas. Many of them, actually, every day. Especially when going to sleep and just before waking up I have loads of them, so I write them down in my RTM. Every month or so I clean up this list of ideas and edit, delete, and merge them according to my feelings. Some ideas make it through several of these casual selection rounds. Those ideas are the ones I’d like to develop further – inventing a business model for them to make them work. So I start writing up an executive summary of maximum 2 pages according to Guy Kawasaki’s blog post and there you go; another idea that needs a business model, team, thinking, investment, etcetera.

At this point I somehow can’t seem to take things forward; shipping it. So far, I have read many management books, “VC recommendations”, and blogs about how to make your business model sustainable, to somehow “fit” into the market you want to conquer. However, all these books don’t do it for me – they all provide too much text (is a 220 page guide still helpful?) and rules of “what to do” (based on the past) and not “how to do it” (better = sustainable). I was missing a strong framework that forces me to make sense of all my loose thoughts, while focusing on the business model itself and the future, learning from past success formulas and proven strategies (we all learn from the past), but without holding on to them too much.

Since last week, I’ve been reading up on Business Model Generation by Alexander Osterwalder (72 page preview here). This book is awesome! I was introduced to Alex by my friend Anne McCrossan about a year ago in regards to Somesso, but I didn’t get the chance to read this book until last week. Alex is a Swiss entrepreneur who teaches systematic approaches to business model innovation. The book is innovative on its own as it’s co-created by 470 other experts (not just by anyone – participants had to pay to join the dialogue). How’s that for innovation?!

This book is really easy to digest and fits well into my “low information diet”, which I wrote about earlier. In short, overseeable sections it provides an overview of the learnings from proven strategies and concepts like “blue oceans” (W. Chan Kim and Renée Mauborgne), “the long tail” and “FREE” (Chris Anderson), multi-sided platforms and open business models. The business model canvas itself is also introduced (see below), which is indeed very handy. Thanks Alex and the 470 others who helped publish this great guide!

Post is by Arjen, click here if you want to read the post from the original url.

Stakeholder Engagement

Stakeholder engagement is an important process to be carried out throughout any project. Involvement and engagement can add value and extend the life of the projects that go live.

Prince2 [Project Management Framework] suggests a good framework for the process:

1. Identification : Know your target people. Who is going to be affected by this project?
2. Analysis of Profiles: This creates an inclusive environment where stakeholders’ points of view, influence, power, conflicts, interests and tradeoffs can be elicited. In the most basic form, we can divide this group into four, by whether they:
a. Support or oppose the project
b. Gain or lose as a result of the project delivery
c. See the project as a threat or enhancement to their position
d. Become active supporters or blockers of the project/its progress.
3. Defining strategy: The communication strategy will be defined:
a. For each profile, the method, format and frequency of the communication
b. The message sender and recipient are decided
c. What information will be communicated ?
4. Planning strategy: With the correct communicator, the negotiations’ timing and method will be planned.
5. Engaging stakeholders  ( Negotiations and Partnership): Carry out the plan.
6. Checking effectiveness (Monitoring): What are the results?

Verifying the xml sent to a webservice

Although there are nice tools like SoapUI for testing webservices, the hard part I found was verifying the xml sent over http. The parameters were sent in the correct format and the xml schema verified, but are the values placed as they are supposed to be? The list goes on…
Netmon [Microsoft Network Monitor], available under Administrative Tools, together with Wireshark solves the problem instantly. Quite easy to use, and the benefit is priceless!

There are just four steps to set up Netmon, and Wireshark is good at displaying the result.

Step1: Open Menu->Capture->Addresses->Add
Put the IP address in Name and Address, then OK, e.g. destination = 192.168.241.10

[Image: Adding Address]

Step2:Open Menu->Capture -> Filter
i) Highlight Address Pairs
ii) Address -> Add
iii) Select:
* MCJS-WKS004 172.23.63.154 on left hand side i.e. your own machine NOT the 127.0.0.1 one
* Choose the destination IP Address in right hand column
iv) If no traffic is captured – then choose ANY as destination
[Image: Filtering Address]

Step 3: Menu-> Capture -> Start and you can run your test now.

Step 4: Open the saved file with Wireshark.

BCS Women in Tomorrow’s World

On Thursday [21st January 2010], I attended a seminar called “Tomorrow’s World Tomorrow’s Women” [TWTW] organised by the BCS. Dr Sue Black is the organiser, and founded BCS Women in 2001; it now has more than 1500 members. It was a nice interactive seminar about what should be done, with a bit of evaluation of what has been done.
The bullet points I took from the seminar:
– Appraisals for women should include more positive feedback [around 90%] and less negative [10%]. Women will always dwell on the negative side…
– Girls at the age of 10 can be motivated by meeting women professionals in IT; having an open day would help a lot.
– After a certain age, people can choose a second career, so maybe women could be encouraged to shift into IT even if they have no formal IT education.
– Instead of importing IT people, the UK should grow its own resources…
– At the development level, women get stuck. They could work more on managerial skills to gain a position on the board and influence people…
– 7% of board members of IT/Telecom companies in Europe are women…
– Women earning around £80k are not interested in new positions offering more… [family life, the husband is earning already…]

Wishing for more women in IT will not help; it obviously needs to be put into practice. Any ideas?

HOW TO CREATE YOUR OWN WORK ITEM?

A TFS work item identifies a piece of work for the project. Project managers can use this facility to keep track of the work done by developers, and developers benefit from clear, specific work items, with the flexibility of having them integrated into their development environment. The real-time status of the items helps projects assess risks and deliver on time.
The existing work items are:
Scenario A description of the user’s need or request.
Bug A defect or deviation between expected and observed behaviour in the product.
Quality of Service Requirement An expected deliverable of the final product. The deliverable can be an outcome, a problem solved, a feature, and so on.
Task A stand-alone action that must be accomplished by a person or group of people.
Risk A probable event or condition that can have a potentially negative outcome on the project in the future.
Instead of creating one from scratch, we are going to export a work item from TFS, modify it, and import it as a new work item. At the very end, I’ll show how to remove the work item.

Part 1: Exporting Task Work Item

To modify any of work items, you may use Power tools, VS2008 Command Prompt, and run:
witexport /f "c:\temp\task.xml" /t "xtf109" /p JLD /n Task

which stands for

f: file location
t: TFS Server Name
p: TFS Team Project Name
n: Work Item Type you want to export.

First, you may want to change the name value of the “WORKITEMTYPE” element.

The xml file has three categories:
I. FIELDS
II. WORKFLOW
III. FORM

I. FIELDS:
The FIELDS section defines which fields you want to show for that work item and what kind of behaviour they display. You can remove any field you do not want.
Each field may have:
– HELPTEXT
– REQUIRED or EMPTY
– The field may ask for a VALIDUSER
– The field may allow only ALLOWEDVALUES, where you can modify the “LISTITEM” entries for the dropdown values, i.e. Severity, Issue, Triage, and Priority…
– The field may let you select one of the values listed, but also stay empty, with SUGGESTEDVALUES.
– DEFAULT

There is only one exception: “State Change Date” has a default clock via the WHENCHANGED property. All other fields reference this value with WHENNOTCHANGED, especially the state change fields.
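As an illustration, a FIELD definition combining several of the elements above could look like this (the field name, refname and values are hypothetical, not taken from the exported Task):

```xml
<FIELD name="Priority" refname="MyCompany.Common.Priority" type="Integer">
  <HELPTEXT>Business importance: 1=must fix, 3=optional</HELPTEXT>
  <REQUIRED />
  <ALLOWEDVALUES>
    <LISTITEM value="1" />
    <LISTITEM value="2" />
    <LISTITEM value="3" />
  </ALLOWEDVALUES>
  <DEFAULT from="value" value="2" />
</FIELD>
```

Here ALLOWEDVALUES restricts the dropdown to the three LISTITEM values, and DEFAULT pre-fills the field with 2.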

II. WORKFLOW
Workflow defines how the states of the work items are controlled, and it is divided into two:
A. States
B. Transitions

A. States:
In the States section, the required and empty fields are identified for each state. By default, there are four states:
1. Proposed
2. Active
3. Resolved
4. Closed
You can add new fields and change their mandatory properties.

B. Transition:
The transitions define the workflow between states. For each transition you can force the user to enter data, i.e. Assigned User, Reason for Deferring, etc.
By default, for task, there are 9 transitions allowed:
From “” to “Proposed”
From “Proposed” to “Active”
From “Active” to “Proposed”
From “Active” to “Resolved”
From “Active” to “Closed”
From “Resolved” to “Active”
From “Resolved” to “Closed”
From “Proposed” to “Closed”
From “Closed” to “Active”
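A transition that forces the user to enter data could be sketched like this (the reason values and the required field are illustrative choices, not taken from the exported Task):

```xml
<TRANSITION from="Active" to="Resolved">
  <REASONS>
    <DEFAULTREASON value="Completed" />
    <REASON value="Deferred" />
  </REASONS>
  <FIELDS>
    <!-- The user must fill in Assigned To before the transition is allowed -->
    <FIELD refname="System.AssignedTo">
      <REQUIRED />
    </FIELD>
  </FIELDS>
</TRANSITION>
```

The REASONS block feeds the “Reason” dropdown shown when the state changes, and the nested FIELDS block marks fields as required for this transition only.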

III. FORM
The last part of the xml contains the FORM section, which defines the layout of the form. The UI of the work item is defined here, and you can implement your own design without any restriction. You can define all tabs, groups on a tab, the type of each control [dropdown or textbox], docking styles, alignments and, of course, their labels.
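For example, a minimal layout with one group and two controls might look like this (the labels and the second field refname are illustrative):

```xml
<FORM>
  <Layout>
    <Group Label="General">
      <Column PercentWidth="100">
        <Control FieldName="System.Title" Type="FieldControl" Label="Title" />
        <Control FieldName="MyCompany.Common.Priority" Type="FieldControl" Label="Priority" />
      </Column>
    </Group>
  </Layout>
</FORM>
```

Each Control points back to a field declared in the FIELDS section via FieldName, so the two sections must stay in sync.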

Part 2: Importing

witimport /f "c:\temp\ToDoForBackEndTeam.xml" /t "xtf109" /p "Test TFS Project"

which stands for:

f: file location
t: TFS Server Name
p: TFS Team Project Name

Be careful: you can’t import the work item unless it has a brand new “WORKITEMTYPE” name.

Part 3: Removing work Item

The easiest part is removing the work item:

tfpt destroywitd /server:"xtf109" /project:"Test TFS Project" /workitemtype:"ToDoForEbru"