Mike Howells's Blog

Just another WordPress.com site

Why I’m Dumping my Windows Phone

Posted by Mike Howells on August 28, 2016

I’ve been an avid Windows Mobile user going back to the Windows Mobile 6.0 days and my Motorola Q9m. I loved that phone; it was one of the most reliable phones I have ever used. I upgraded along the way to the HTC 8X, but reliability issues forced me to abandon it. When the Nokia Lumia Icon came to market, I jumped all over it. That was two years ago…

The Icon came with Windows Phone 8, then eventually an upgrade to 8.1. Fortunately, the Icon made the upgrade list for Windows 10 Mobile, and the upgrade went flawlessly. It even fixed two bugs on my Icon: a speaker that didn’t work unless I was on speakerphone, and right-channel audio that didn’t work while recording video. I was elated. To this date, the 20 MP rear camera is by far the best camera I’ve ever had. I’ve filmed all of my nephew’s cross-country and track & field events, and taken endless dog photos. So, why would I abandon it?

The proverbial straw that broke the camel’s back, forcing my decision to abandon Windows Phone, was my company’s decision to move to o365 (Office 365). In the way we are implementing Office 365, we are requiring a feature called Modern Authentication. With Modern Authentication, you are required to verify your identity with an additional factor beyond your password, such as an authenticator app or an SMS text message. Unfortunately, Microsoft isn’t interested in bringing Modern Authentication to the Windows Phone ecosystem within an acceptable time period. The Office 365 blog post (linked above) says “Coming Soon,” but that was 9 months ago. I’ve given up all hope…

About a week ago, when my company migrated my account to o365, my native Skype for Business, native Outlook and native Calendar apps all ceased to function. Sure, the web version of Outlook can give me some access, but it’s not the same. I want the native client.

So, my hand is being forced. Microsoft has made it clear that they are developing apps for iOS and Android to appease the larger market share. My particular phone on my particular platform represents a niche within a niche within a niche. iOS and Android combined for a record 99% of smartphone sales last quarter. Windows Phone has an incredibly small 0.6% market share of the smartphone market. Within that 0.6% share, Windows 10 Mobile represents only 14% of Windows Mobile usage. My Nokia Lumia Icon (929) doesn’t even crack the top 5 Windows Phone handsets! This is the new world order under Satya Nadella’s leadership. Is it a bad thing? Perhaps not. Time will tell.

As of this writing, my upgrade date is 44 days away. Sadly, my Icon, which retailed for $459.00 back in the day, now garners a measly $25.00 trade-in value.

Image

I will likely join the rest of the collective and purchase a newly minted iPhone 7 Plus, which should be available sometime next month. I will have all the niceties of all the apps that I need for work including apps that I’ve never been able to experience with the Windows Mobile platform.

That being said, however, all hope is not lost. There is talk, albeit rumor, that a Microsoft Surface Phone is in development. Some say it may be released in the spring of 2017. If so, that just may be enough to pull me back in. Time will tell…

Update: On November 14, 2016 I received my black iPhone 7 Plus. This phone is far superior to any phone I have ever owned including any Windows Phones. If Microsoft is to compete in this arena, they are going to need to hit a grand slam.

Update: On November 24, 2020 I received my silver iPhone 12 Pro Max. 


Posted in Phone | Leave a Comment »

Diary of a Garmin BITS Job Gone Bad

Posted by Mike Howells on February 27, 2014

It’s one of those e-mails that no one ever wants to receive…

Dear AT&T High Speed Internet Service Customer,

We want to remind you that your AT&T High Speed Internet service includes 150 gigabytes (GB) of data for each billing period.
You have exceeded 150 GB this billing period.

What?!?

Of course, I believe it is an error. But, when I open my daily usage chart, I can clearly see this is no error:

Image

So, what in the world is downloading all of this data and why did it start on Wednesday the 19th?

I opened Microsoft’s Network Monitor and saw a multitude of requests to a Garmin subdomain called nyc1.gdn.garminsource.net. It’s basically a CDN (Content Distribution Network) that Garmin utilizes to transfer high-volume content such as map updates to its user base.

I have the Garmin Map Updater service installed, so maybe it is downloading a new map for my Garmin device. But 30 GB/day is excessive even for the largest of map updates.

I needed more tools at my disposal to determine what was happening. So, I downloaded and installed one of the best network bandwidth usage tools that I have ever come across. It’s called NetBalancer by SeriousBit. The NetBalancer desktop application allows you to view each process and how much bandwidth it is consuming. Once I opened the application, I could clearly see svchost.exe was consuming a rather large chunk of bandwidth.

Image

Now that the culprit was identified, how do I go about stopping it?

I suspected that Garmin utilized the BITS service. Utilizing BITS is a common practice for developers, since it saves them from writing their own file transfer service. BITS stands for Background Intelligent Transfer Service. It’s an easily identifiable service, which can be stopped via the Services applet as shown below:

Image

As soon as I stopped the BITS service, the download immediately stopped and my bandwidth consumption returned to normal.

Another day passed and I re-opened NetBalancer and noticed that svchost.exe was consuming bandwidth again. I couldn’t believe it. The BITS service started itself up again. I even disabled the BITS service, which didn’t help. BITS would simply re-enable itself and then start itself. The activity started to feel malicious in nature.

It was at this point that I decided to uninstall everything Garmin on my desktop. Surely uninstalling my Garmin apps would fix it, right?

Nope!

I uninstalled everything Garmin and the Garmin BITS jobs continued to consume my bandwidth with no end in sight. This had been going on for days. So, something must have gone terribly wrong with some Garmin code somewhere.

It was time to continue my investigation…

I found some BITS commands available via PowerShell.

The one I found to list the existing transfer jobs is this command: Get-BitsTransfer -AllUsers

Image
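In theory, the same module can remove jobs as well. Here’s a minimal sketch, assuming the BitsTransfer module is available and that you have rights to the jobs (as I was about to learn, job ownership matters):

Import-Module BitsTransfer
# List every BITS job on the machine, regardless of owner
Get-BitsTransfer -AllUsers | Format-List -Property DisplayName, JobState, OwnerAccount
# Attempt to cancel them all (fails for jobs owned by another account)
Get-BitsTransfer -AllUsers | Remove-BitsTransfer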

I then discovered there is a built-in command line utility called BITSADMIN that has all sorts of power!
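For example (two of the stock switches; the job GUID placeholder is whatever /list reports):

bitsadmin /list /allusers /verbose
bitsadmin /cancel {job GUID}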

I issued this command in an attempt to cancel all BITS jobs: BITSADMIN /reset /allusers

Image

No luck.

Of course, there is no reason given for the failure. But, after performing some research on canceling BITS jobs, it appears that you have to be logged in as the user who created the BITS job. So, how do you log in as NT AUTHORITY\SYSTEM? I actually blogged about this in 2011 in this blog article here: https://mikehowells.wordpress.com/2011/02/12/running-a-command-prompt-as-nt-authoritysystem/

Basically, you open a command prompt as administrator. Then, launch the SysInternals tool psexec.exe as SYSTEM and it will launch a command prompt as NT AUTHORITY\SYSTEM. I was feeling pretty confident that this would work.
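From that earlier post, the sequence boils down to this (the bitsadmin reset then runs inside the newly opened SYSTEM-owned prompt):

psexec.exe -i -s %SystemRoot%\system32\cmd.exe
bitsadmin /reset /allusers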

Image

Nope. It failed miserably. The error indicates that the request failed because the user (i.e. SYSTEM) has not logged on to the network. This was a fatal blow: the NT AUTHORITY\SYSTEM account is not designed to access the network; that role is usually reserved for the NETWORK SERVICE account.

So, I decided to fire-up my good friend ProcMon. ProcMon, or Process Monitor, is another brilliantly written tool that is part of the SysInternals Suite. After launching ProcMon, I included only the process svchost.exe. I could then clearly see the folder that svchost.exe was accessing, which was: C:\ProgramData\Garmin\Core Update Service\MAP-NA-2014-40

It was clear to me that the Garmin uninstaller did not do a good job of cleaning-up after itself at all.

At this point, I had two options moving forward:

Option 1) Use the NetBalancer tool to limit the download/upload rate for svchost.exe. This was not preferable, as many services run inside svchost.exe and throttling it would have unintended consequences.

Option 2) Delete the C:\ProgramData\Garmin\ folder.

I opted for Option 2.
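From an elevated PowerShell prompt, Option 2 amounts to something like this (a sketch; the path is the one ProcMon identified above, and stopping BITS first releases any file locks):

Stop-Service -Name BITS -Force
Remove-Item -Path 'C:\ProgramData\Garmin' -Recurse -Force
Start-Service -Name BITS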

This stopped the BITS job and put it into an error state. At least it wasn’t downloading.

I now have two strikes against me from AT&T. If I go over my 150 GB threshold one more time, I will be charged $10 for each 50 GB over my limit. Why does AT&T have such a low threshold for its DSL user base? It’s basically AT&T’s way to force you into their U-Verse service. Even their U-Verse service only has a 250 GB/month limit, although I hear it’s not enforced.

I still have two remaining Garmin jobs that are sitting in a suspended state.

If anyone has any ideas how to delete these stale jobs I would like to hear from you!

Posted in Administration | 1 Comment »

Allowing FTP over HTTP with Microsoft Forefront Threat Management Gateway

Posted by Mike Howells on February 18, 2013

I’ve been working with Microsoft’s Forefront Threat Management Gateway (TMG) since it was released back in November 2009 and it continues to surprise me…

What I thought would be an easy protocol addition to an existing access rule turned out not to be the case.

I was notified of an issue whereby users were suddenly unable to access an FTP site that they had been using for years. Their method of accessing the FTP site was to enter the URL of the FTP site within the address line of Internet Explorer (IE) using this format: ftp://username:password@example.com

I found it interesting that this worked just fine with ISA Server 2004/2006 but suddenly did not work with TMG 2010. Considering that we had just switched from ISA 2004/2006 to TMG 2010, it didn’t surprise me that something broke; I figured TMG was handling it differently, which it is.

When you access an FTP site within Internet Explorer, TMG treats it as “FTP Over HTTP” instead of just plain ol’ vanilla FTP. Since this new protocol was not defined in any existing access rules, I had to go into the Enterprise-level policy and add the “FTP Over HTTP” protocol to the access rule.

I selected my “FTP Access” rule at the Enterprise-level policy, went to the Protocols tab, selected Add and looked for the “FTP Over HTTP” protocol I had seen earlier. As hard as I looked, I could not locate this protocol!

Image

I decided to open a case with Microsoft, who confirmed that this is a bug in TMG. Unfortunately, since TMG has been EOL’d (End-Of-Life’d), they have no plans to fix this.

The work-around to fix this problem is to add the access rule at the array-level. Unfortunately, this means a lot of manual work especially if you have numerous arrays to manage.

Image

You can simply add the “FTP Over HTTP” access rule to your existing web access policy at the array-level. Or, more likely, you’ll want to create a separate access rule especially if you do not want everyone to have access to this protocol.

Shown here is the “FTP over HTTP Access” rule successfully added to the array-level policy:

Image

When you log access to this rule you’ll notice another cool feature. The developers of TMG anticipated that password information for FTP sites is sent in clear text and may be easily viewed via a live logging session. So they remove the password during a live logging session, as shown below:

Image

There is one additional workaround available to you. If you do not want to create a special access rule in TMG to allow this behavior, you can make a change in Internet Explorer. Simply open Internet Explorer, select Tools, Internet Options, Advanced and scroll down to the Browsing section. In the Browsing section you will see a setting called “Enable FTP folder view (outside of Internet Explorer).” Checking this box will allow you to access FTP sites within Windows Explorer and it will not invoke the “FTP Over HTTP” protocol.

Image

If you’d like to learn more about publishing FTP in ISA Server or TMG, take a look at the following article. It is the most thorough discussion on FTP in ISA and TMG that I have seen:

http://microsoftguru.com.au/2010/08/27/troubleshooting-outbound-ftp-access-in-isa-tmg-server/

If you are interested in adding/allowing malware inspection for FTP access rules in TMG checkout the following article (normally this is not possible):

http://carbonwind.net/blog/post/Forefront-TMG-2010-Using-malware-inspection-and-URL-filtering-for-FTP-on-access-rules.aspx

Posted in Firewalls | Tagged: , , , , | 3 Comments »

IP2MAC

Posted by Mike Howells on January 8, 2012

In all previous instances when working with Windows NLB (Network Load Balancing) I have always used Unicast. Recently, I ran into a scenario where Unicast was not allowed in a VMware environment, which forced the use of Multicast instead. Apparently, vMotion works with Multicast but not with Unicast. Note: For a detailed description of these NLB options see the articles posted at the end of this article.

The three options available to Windows NLB are: Unicast, Multicast and IGMP Multicast. In short, Unicast is a configuration setting which instructs Network Load Balancing to change the MAC address of the cluster adapters to the same value for all hosts in a cluster. This is the default mode of operation. Multicast is a configuration setting which instructs Network Load Balancing to add a multicast MAC address to the cluster adapters on all hosts in a cluster; the adapters’ existing MAC addresses are not changed. IGMP Multicast is essentially the same as plain Multicast except that IGMP (Internet Group Management Protocol) helps eliminate the switch flooding apparent with both Unicast and Multicast.

The one additional “issue” with Multicast is that you will most likely need to add a static ARP entry to your distribution switches or routers to map the NLB cluster MAC to each of the NLB cluster IP addresses.

So, how do you find out the MAC address of your network load balanced IP address?

If you are using Unicast, you can do this by issuing the IPCONFIG /ALL command from a command prompt:

Unicast MAC

But, what if you are using Multicast or IGMP Multicast? How do you find out what the cluster MAC address is in either of these cases? Hint: IPCONFIG will not help you.

The answer is your friend IP2MAC…

IP2MAC is an option available from the NLB.exe command found in the %SystemRoot%\System32 folder:

IP2MAC

How do you execute this command?

Open a command prompt and change your folder path to %SystemRoot%\System32 and then type: NLB.exe ip2mac <cluster IP>

So, for example, if your NLB cluster IP address is 10.0.0.254 you would type: NLB.exe ip2mac 10.0.0.254

The results will appear as follows:

IP2MAC results

You are now presented with the Unicast, Multicast and IGMP Multicast MAC addresses. You can now give this information to your network administrator so that the static ARP entry can be made to the distribution switches or routers (assuming Multicast or IGMP Multicast).

Interestingly, the Unicast and Multicast MAC addresses are very similar except the high-order octet for Unicast begins with an 02 whereas Multicast begins with an 03. IGMP Multicast is way out there in left field with hardly anything similar.
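There is a pattern behind this, as I understand it: NLB builds the Unicast MAC as 02-BF followed by the four octets of the cluster IP in hex, and the Multicast MAC as 03-BF plus those same four octets, while IGMP Multicast uses the reserved IANA multicast prefix 01-00-5E-7F plus only the last two octets of the IP, which is why it looks so different. A quick PowerShell sketch that derives all three for the example cluster IP above:

$octets = [System.Net.IPAddress]::Parse('10.0.0.254').GetAddressBytes()
# Unicast and Multicast embed all four octets; IGMP Multicast uses only the last two
'Unicast        : 02-BF-{0:X2}-{1:X2}-{2:X2}-{3:X2}' -f $octets[0], $octets[1], $octets[2], $octets[3]
'Multicast      : 03-BF-{0:X2}-{1:X2}-{2:X2}-{3:X2}' -f $octets[0], $octets[1], $octets[2], $octets[3]
'IGMP Multicast : 01-00-5E-7F-{0:X2}-{1:X2}' -f $octets[2], $octets[3]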

Using NLB with ISA Server, Part 1: How Network Load Balancing Works

Selecting the Unicast or Multicast Method of Distributing Incoming Requests

Posted in NLB | Leave a Comment »

My Worst Flight

Posted by Mike Howells on April 8, 2011

It was Thanksgiving in 1991, and my family and I agreed that for our family reunion in Florida I would fly everyone down in a small single-engine four-seater Piper Turbo Arrow.

Piper Turbo Arrow (PA28RT201T)

I had been flying for several years, had around 200 hours of flight time, and had just obtained my instrument rating. My instrument rating gave me the ability to fly in inclement weather and it gave me just enough confidence to fly my family from St. Louis, Missouri all the way down to the southern tip of Florida. After making a phone call to FSS (Flight Service) to check on the weather, I was assured that the weather was beautiful en route and that I shouldn’t have any problems.

The flight plan originated from The Spirit of Saint Louis airport (SUS) in Chesterfield, MO and included three fuel stops with each leg roughly two hours in length:

Stop #1) Huntsville, Alabama (HSV)
Stop #2) Tallahassee, Florida (TLH)
Stop #3) Regional Southwest airport (RSW) in Ft. Myers, Florida
Stop #4) Marco Island airport (MKY) in Marco Island, Florida.

My brother and his girlfriend were flying in from Los Angeles, which is why we made a short stop-over at the Ft. Myers airport to pick them up. This was going to be a red-eye flight so that we could get to Marco Island early and meet up with everyone in a timely fashion.

Below is a map of our route of flight:

SUS-HSV-TLH-RSW-KMKY

We departed the Spirit of Saint Louis airport just after 10:00 pm on Thursday, November 28, 1991. Shortly after takeoff, we entered IMC (Instrument Meteorological Conditions) at my assigned altitude of 7,000 ft. When we were overflying Paducah the weather conditions cleared. It was an awesome view seeing the full moon through the ragged cloud deck. The remainder of the flight was uneventful and we landed 10 hours later at Regional Southwest airport (RSW) in Ft. Myers, Florida just after 8:00 am the next morning. The most beautiful part of this trip was the flight over the Gulf of Mexico. A sight I will never forget was the sunrise over the Gulf from 7,000 ft on a clear morning with air as smooth as glass. I regret not having a picture of the view.

After spending a few days with our relatives and enjoying what southern Florida had to offer it was time to return home and get everyone back in time for work on Monday. Our return trip from Marco Island, Florida to the Spirit of St. Louis airport was almost identical to our route coming down.

The flight back included three stops:

Stop #1) Tallahassee, Florida (TLH)
Stop #2) Huntsville, Alabama (HSV)
Stop #3) Chesterfield, MO (SUS)

Below is a map of the planned flight:

KMKY-TLH-HSV-SUS

Before departing from Florida I had been making daily phone calls to FSS to keep an eye on the weather. This flight was over 1,000 miles in length and crossed a lot of real estate, so my attention needed to be on the weather. On Sunday, December 1, I called FSS and they mentioned that a line of weather was making its way down and that I should plan accordingly. FSS would never make any recommendations; they would merely give you the information you needed and then allow you to make your own decision on what to do. I could see on radar that precipitation was developing to the northwest of Huntsville, but it was several hours away. The weather in St. Louis also looked marginal, but more importantly there were no reports of icing.

At this point, I informed my family early that Sunday that we should leave as soon as possible to try to beat the weather.

The one funny thing about departing an airport in southern Florida is that they have to clear the runway of any alligators before you take off.

We lifted off from Marco Island at around sunset on Sunday, December 1.

The flight from Marco Island to Tallahassee was uneventful. After refueling I walked over to the Tallahassee FSS, which happened to be located at the airport. I spoke to the FSS person on duty and he showed me the live weather radar. I could see that the weather was approaching Huntsville faster than anticipated. In fact, it was a two-hour flight from Tallahassee to Huntsville and the weather was two hours away from Huntsville. I thought to myself that if I put the throttle to the firewall I could beat the weather. I had a real case of “get-there-itis.” In retrospect, if I had been more experienced at the time, I would have recognized this human factor in aeronautical decision making, but the pressure to get my family home overrode any apprehension that I was feeling at the time.

So, I took off from Tallahassee and set the Piper Turbo Arrow’s engine power to the maximum allowable 75% instead of the planned 65% power setting. I also had a tailwind, so my ground speed was a mind-boggling 174 knots (200 mph). At this speed, I was pretty confident I could land before the thunderstorms started rolling into Huntsville. However, during the flight at cruise altitude, I started seeing continuous lightning in front of me on the horizon about 100 miles away. This was not a good sign. The thunderstorms were moving towards the airport faster than predicted, which meant that they were possibly intensifying. Since the airplane was on autopilot, I had a lot of time to ponder what I was going to do. I reached into my flight bag, grabbed my sectional and started to look for another airport in case we had to divert.

No more than 5 minutes went by when I looked up from the map to make sure we were still on course when, to my shock, I saw that the VAC annunciator warning light was illuminated on the instrument panel. The vacuum system had failed! At first, I went into this strange state of denial where I thought perhaps that the VAC light wasn’t really on and that it was some sort of optical illusion and that this could not possibly be happening with my family onboard. Maybe the light came on by mistake? Then, my worst fears were confirmed. All of my vacuum instruments including my artificial horizon and heading indicator started to rotate wildly. Now I’m in deep shit…

So, here I find myself in solid instrument conditions in the clouds at night with no instruments to safely navigate except for my lonely magnetic compass and my electronic turn coordinator. Immediately, I called center (air traffic control) and advised them that my vacuum pump had just failed and that I lost all of my main instruments. The controller acknowledged this and asked me to keep him advised. At this point, I did not immediately tell my family what had happened even though they later told me they noticed the warning light was on for a long time and they were wondering what was going on. The last thing I needed was a panicking family distracting me from doing my job (aviate, navigate, communicate).

I briefly considered continuing the flight into Huntsville on partial instruments. But watching the continuous lightning on the horizon made me think twice about that. I knew it was definitely not a good idea to land in instrument conditions with thunderstorms in the vicinity especially in an airplane with partial instruments. I knew I was going to have to do a no-gyro approach and I wanted an airport with good weather conditions.

Because my vacuum pump had failed, my autopilot was inoperative. I enlisted the help of my sister, who was sitting to my right, to take the controls in an attempt to keep the wings level for me. But since we were in instrument conditions, this was an extremely dangerous idea. I called the controller and notified him that I no longer wanted to continue to my planned destination of Huntsville, Alabama. I now delegated the task of finding a suitable destination airport to the air traffic controller. The controller then started reading off weather conditions at airports around the region, and they all had low cloud ceilings (not good with a partial instrument panel). At this point in the flight I had assumed control of the airplane, keeping my focus on keeping the wings level.

I knew I would have to shoot a partial-panel ILS (Instrument Landing System) approach, which is something I had never done before in actual instrument conditions. All of my training would now be put to use. The controller read the weather conditions at Montgomery, Alabama and it seemed to be the best weather around. At this point I was located around the Columbus, Georgia area, so I took up a westerly heading to Montgomery (MGM) about 60 miles away. My original flight plan had now been scrapped and I needed to get this plane on the ground as safely as possible.

My new route of flight now looks roughly like this:

TLH-roughly near CSG-MGM

Fortunately, there was no turbulence to bounce around my magnetic compass and the Montgomery airport is radar equipped and manned by an air traffic control tower. I was then given no-gyro vectors to the ILS approach at Montgomery. Below is the approach plate for the ILS 28 approach into MGM:

ILS 28@MGM

The flight to Montgomery went well until I was given no-gyro vectors to intercept the inbound course (i.e. “start turn, stop turn, stop!”). Since no-gyro vectors can be extremely inaccurate, I was vectored to intercept the inbound course right at the outer marker, and the controller asked me if that was ok. Controllers are normally supposed to vector you at least 2 miles outside the outer marker. I agreed that it was ok because I didn’t want to spend any more time than necessary up in the air. Since I did not have any primary navigational instruments I had to fly solely by the ILS needle in the cockpit. I chased the needle quite a bit, but I broke out of the clouds at about 1,000 ft AGL (Above Ground Level). The runway approach lights were so bright I thought I was going to go blind. The landing was uneventful, and we were very fortunate in that the airport had a Piper authorized service center.

The plane was repaired the next day but the weather in St. Louis had deteriorated with freezing rain and low cloud ceilings. We were stranded in Montgomery, Alabama for two days until the weather cleared in St. Louis.

We finally departed on Tuesday, December 3 with one stop-over in Paducah, Kentucky and finally home to Spirit:

MGM-PAH-SUS

I still wonder to this day what would have happened if my vacuum pump did not fail.

Even though the title of this post is “My Worst Flight,” in some ways it was one of my best flights. All of the hard work I put into training for this emergency paid off.

For any student pilots, or even experienced pilots: always keep “get-there-itis” in check. Since becoming a flight instructor I use the following definition and guidelines to help recognize and avoid this syndrome:

Get-there-itis is a tendency that clouds the vision and impairs judgment by causing a fixation on the original goal or destination combined with a total disregard for any alternative course of action.

  1. Duck-under-syndrome, not get-there-itis, is a tendency to “sneak a peek” by descending below minimums during an approach.
  2. Get-there-itis occurs when the pilot has an extremely strong motivation to arrive at the planned destination.
    1. When the motivation to “get there” is strong enough, it overshadows all perceived obstacles to completing the flight as planned.
      1. The pilot does not want to hear about things that would be grounds for delaying or canceling the flight.
      2. The pilot has the ability to recognize a potentially dangerous situation, but chooses not to since the perceived rewards of completing the flight are so great.
      3. This phenomenon is common in general aviation when a pilot is returning to his home base after a weekend cross-country.
        1. The pilot absolutely “must” be back home in time for an important commitment, and en route weather that might have caused him/her to cancel the flight earlier may be ignored on the return leg.
      4. This phenomenon is also common in military pilots who have been separated from their families while on an extended operational deployment.
        1. Aircraft that would have been rejected for any mission during the deployment because of mechanical problems suddenly become perfectly acceptable, since they are the pilot’s only means of early transportation home.
  3. As an instructor, one way for you to help your students avoid get-there-itis is to train them to consider that the flight might not be completed, and plan accordingly in advance. For instance, tell them to schedule an extra day off from work when returning from a cross-country flight, even though they probably will not need it.

Edit: The video which most closely resembles what could have happened to me is this accident case study from the Air Safety Institute: Accident Case Study: Single Point Failure

Posted in Aviation | 1 Comment »

Adding a Child Domain Using Windows Server 2003 vs Windows Server 2008 R2

Posted by Mike Howells on April 5, 2011

If you’ve ever had to add a new child domain to an existing forest in Active Directory using Windows Server 2003, you may have already realized that you must have DNS configured properly before creating the new child domain. Put another way, if you didn’t know what you were doing you could get into trouble very quickly. With Windows Server 2008 R2 this process is dramatically simplified, and the steps for DNS delegation are done for you automatically.

Our example forest is simple with bigfirm.biz representing the forest root domain and ecoast.bigfirm.biz representing the child domain.

The domain controller in bigfirm.biz is bigdog.bigfirm.biz at 192.168.2.130.

The domain controller in ecoast.bigfirm.biz is srv1.ecoast.bigfirm.biz at 192.168.2.131.

If you’ve read any of Mark Minasi’s books you’ll notice that this is the naming convention he uses.

In the below screenshot you can see that I have already run DCPROMO on bigdog.bigfirm.biz and that DNS is configured with the forward lookup zones populated.

Note: I had the DCPROMO process automatically install and create DNS for me for this process.

DNS on bigdog.bigfirm.biz (Windows 2003)

Now we’re at the point where we want to add the child domain of ecoast.bigfirm.biz to the existing forest root domain of bigfirm.biz.

With Windows Server 2003 you must create the DNS domain on the parent before you run DCPROMO on the child domain controller.

Therefore, right-click the bigfirm.biz DNS zone and select the option to create a new domain and then enter the domain name of ecoast. You don’t have to enter any records in ecoast.

DNS on bigdog.bigfirm.biz (Windows 2003)

The next step is to prepare the child domain controller in the child domain.

On srv1.ecoast.bigfirm.biz you need to point its primary DNS server to the parent DNS domain controller (bigdog.bigfirm.biz) at 192.168.2.130. If you screw up here and point DNS to itself, the child domain controller will have no way to get home to the “mothership” and will report an error once you try to run DCPROMO.

TCP/IP settings on srv1.ecoast.bigfirm.biz (Windows 2003)

There is another minor but very important procedure that you must also do on the child domain controller (srv1).

You must populate the DNS suffix box with the new domain that you are creating (ecoast.bigfirm.biz). If you don’t do this step then the child domain controller will not populate the DNS records properly at the parent DNS zone.

DNS suffix settings on srv1.ecoast.bigfirm.biz (Windows 2003)

Once all of these procedures have been done you can now run DCPROMO on the child domain controller srv1.ecoast.bigfirm.biz.

Note: Don’t forget to allow dynamic updates on the parent DNS server (bigdog.bigfirm.biz) or else the process will fail. The DCPROMO process should warn you of this.

What I see happen a lot with Windows Server 2003 is that it takes WAY too long for these DNS records to populate at the parent. In fact, it may take upwards of 10-15 minutes or so. Don’t be surprised if you see errors in the system event log on srv1 such as this (see screenshot below). This type of problem usually corrects itself, but if it doesn’t you can try opening a command prompt and typing ipconfig /registerdns on srv1 to see if it helps speed up the process.

Event viewer on srv1.ecoast.bigfirm.biz (Windows 2003)
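While you wait, one way to check whether the critical records have landed at the parent yet is to query it directly for the child domain’s domain-controller SRV record. A hedged example using the names and IP from this lab:

nslookup -type=SRV _ldap._tcp.dc._msdcs.ecoast.bigfirm.biz 192.168.2.130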

After waiting the aforementioned 10-15 minutes for replication to occur and/or after manually issuing the ipconfig /registerdns command on srv1, the DNS zone on bigdog.bigfirm.biz should now look like this:

DNS on bigdog.bigfirm.biz (Windows 2003)

You’ll notice that DNS is not being hosted on srv1 but is instead being hosted on the parent domain controller bigdog. What if you want to have srv1 host the DNS zone ecoast.bigfirm.biz instead? You can easily do this by a process called DNS delegation. DNS delegation can be a good idea, especially if you want to reduce network traffic, provide redundancy and simplify your DNS environment. There is a great KB article on how to create a child domain in Active Directory and delegate the DNS namespace to the child domain; it is listed at the end of this article.

From my perspective, the above procedure seems time consuming and laborious. Wouldn’t it be nice if Microsoft improved on this procedure? With Windows Server 2008 R2 your wish has come true. I get the impression that the directory services team at Microsoft took some heat for this procedure on Windows 2003.

For the below example, everything remains the same except we are now using Windows Server 2008 R2 as our operating system.

After running DCPROMO on bigdog in our forest root domain bigfirm.biz our DNS zone looks like this:

DNS on bigdog.bigfirm.biz (Windows 2008 R2)

Now, here is where things get super cool. Remember all of the steps that we went through to prepare our DNS environment before we could even introduce a new child domain into the mix?

Well, prepare to be amazed.

As before with our Windows 2003 example, on srv1 make sure that you point the primary DNS server to the parent DNS server (bigdog.bigfirm.biz).

TCP/IP settings on srv1.ecoast.bigfirm.biz (Windows 2008 R2)

Once you do that all you have to do now is run DCPROMO on srv1!

One thing I like about the new DCPROMO with Windows Server 2008 R2 is that it automatically checks and detects that there is no DNS server authoritative for the ecoast.bigfirm.biz domain. Because it could not find one, it will automatically create a DNS delegation for you. Brilliant!

Below you can see in the DCPROMO summary screen that it will automatically create the DNS delegation for you since you did not pre-create the ecoast.bigfirm.biz domain on the parent server.

Below is a screenshot of what the bigfirm.biz DNS zone looks like on bigdog.bigfirm.biz after the DCPROMO process completes on srv1.

Notice that ecoast is greyed-out indicating that the zone is now delegated.

Delegated DNS on bigdog.bigfirm.biz (Windows 2008 R2)

After logging into srv1, DNS was installed automatically and the ecoast.bigfirm.biz DNS zone was created and populated with all of the DNS records. No errors in the event log and everything just works and works immediately.

DNS on srv1.ecoast.bigfirm.biz (Windows 2008 R2)

They say the devil’s in the details, and Windows Server 2008 R2 does not disappoint. Below you can see that the DCPROMO process automatically adjusts the primary DNS server on srv1 to point to itself and points its secondary DNS server to its parent DNS server.

TCP/IP settings on srv1.ecoast.bigfirm.biz (Windows 2008 R2)

One final note I should mention is that it is no longer required to populate the DNS suffix on the child domain controller srv1 as we were required to do with Windows Server 2003.

How To Create a Child Domain in Active Directory and Delegate the DNS Namespace to the Child Domain
http://support.microsoft.com/kb/255248

Posted in Active Directory & DNS | 6 Comments »

Reading a Remote Registry Key Through Scripting

Posted by Mike Howells on March 19, 2011

I’ve been working a lot lately with SCCM DCM (System Center Configuration Manager Desired Configuration Management).

If you’ve worked with ConfigMgr you know how powerful the tool is. The DCM portion of ConfigMgr is particularly powerful when scanning collections for compliance against a set of baselines (composed of configuration items).

The one thing that you quickly realize with either ConfigMgr or DCM is that you need to script a lot of stuff to get what you want. DCM will allow you to use three different scripting frameworks: PowerShell, JScript, or VBScript. For my situation, PowerShell is not an option because the target servers must have PowerShell installed, which is not a guarantee. So, I chose VBScript.

One of the scans we are performing is to check for the existence of a registry key and key value. The registry entry is the following:

HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\IniFileMapping\Autorun.inf
Value: (Default)=@SYS:DoesNotExist

The above registry key and value is one of those Windows Secrets that prevents AutoRun attacks.

Note: Details of the AutoRun attack and how to prevent it is listed at the end of this article.

Reading the local registry via scripting is relatively straightforward. Using the WshShell object’s RegRead() method, you can display the value located at the above registry key by running the following VBScript.

' Read the key's default value (the trailing backslash tells RegRead to return the key's default value)
Set objWshShell = WScript.CreateObject("WScript.Shell")
strResults = objWshShell.RegRead("HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\IniFileMapping\Autorun.inf\")
WScript.Echo strResults

If you’re wondering how to run the above script, you can paste it into Notepad (or my favorite, Notepad++), save it as RegRead.vbs and then execute it by double-clicking the .vbs file.

Note: If this registry key does not exist when you run the script, you will receive an error.

The question now becomes how do you run this script against a remote system to see if this registry value exists? It’s too bad we can’t just wave our magic remote wand and make VBScript magically do this. If only it were that easy…

To get VBScript to work remotely you have to invoke WMI (Windows Management Instrumentation). WMI is a massive topic and way beyond the scope of this article. Suffice it to say it is the magic you will invoke to gain access to remote stuff.

The first problem in trying to execute the above script against a remote system is that we are interrogating the registry for subkeys when we actually want the default value of the key. The second problem is that WMI’s StdRegProv (the WMI interface for remote registry access) is really hard to use. It is full of pitfalls because the results depend upon the type of data found in the registry. For example, if the default value is not set it only returns a scalar value (single value) as opposed to returning an array (multiple values). Also, you need to determine a value’s data type before you can read it; that is to say, every value type requires a different method to extract its value. Wow, that just turned difficult fast. Microsoft should examine this portion of StdRegProv because it is unreasonably complicated. However, my belief is that Microsoft’s focus is more on PowerShell as a solution as opposed to VBScript, so it is what it is…

In our example, life is a bit easier because we already know that the default value is going to be a string value.

Cobbling all of this extraneous information into a script to gain what we need looks like this:

' Registry hive constant for HKEY_LOCAL_MACHINE
const HKEY_LOCAL_MACHINE = &H80000002

ExistOrNot = "Key does not exist"

strComputer = "."   ' "." = local machine; replace with a remote computer name

' Connect to the registry on strComputer via WMI's StdRegProv provider
Set objReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")

strKeyPath = "SOFTWARE\Microsoft\Windows NT\CurrentVersion\IniFileMapping\Autorun.inf"

' Read the key's default value (the empty string "" denotes the default value)
objReg.GetStringValue HKEY_LOCAL_MACHINE, strKeyPath, "", strValue

If InStr(strValue, "@SYS:DoesNotExist") <> 0 Then
    ExistOrNot = "Key exists"
Else
    ExistOrNot = "Key does not exist"
End If

WScript.Echo ExistOrNot

Without going into the gory details of the script, the magic really happens in the last section when we issue the following function: If InStr(strValue, "@SYS:DoesNotExist") <> 0.

The InStr function returns the position of the first occurrence of @SYS:DoesNotExist within the variable strValue, and returns zero if @SYS:DoesNotExist is not found. Therefore, if the value returned is not zero, our key exists, as our script shows above.
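As an aside, since I mentioned earlier that Microsoft’s focus is shifting toward PowerShell: where PowerShell is available, the same remote check collapses to a few lines using the .NET registry classes. A sketch, with SERVER01 as a hypothetical target (the Remote Registry service must be running on it):

$computer = 'SERVER01'   # hypothetical target server
$path = 'SOFTWARE\Microsoft\Windows NT\CurrentVersion\IniFileMapping\Autorun.inf'
$hklm = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $computer)
$key = $hklm.OpenSubKey($path)
# GetValue('') reads the key's default value; a missing key yields $null here
if ($key -and ($key.GetValue('') -match 'DoesNotExist')) { 'Key exists' } else { 'Key does not exist' }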

Scripting is akin to playing an instrument. Just like anything else, the more you do it, the better you become at it.

One quick trick prevents AutoRun attacks
http://windowssecrets.com/2007/11/08/02-One-quick-trick-prevents-Autorun-attacks

Notepad++
http://notepad-plus-plus.org/

Microsoft Script Center
http://technet.microsoft.com/en-us/scriptcenter/bb410849.aspx

Posted in Scripting | 3 Comments »

Using Sysinternals’ Process Monitor to Troubleshoot a Known Unknown

Posted by Mike Howells on March 13, 2011

I was recently tasked to determine why the ASP.NET State Service would not start on a Windows 2003 Terminal Server. All I had to go by was the error message, “Error 5: Access is denied.”

Not a lot to go on...

In addition to the above error message a cryptic Event 532 was being logged in the security log of event viewer.

Asphinctersayswhat?

According to Microsoft, the ASP.NET State Service provides support for out-of-process session state for ASP.NET. ASP.NET has a concept of session state. If this service is stopped or disabled, out-of-process requests will not be processed, and subsequently the developers using this Terminal Server for their development work are out of business.

Ok, now what? As Donald Rumsfeld would say, “We also know there are known unknowns; that is to say we know there are some things we do not know….”

Researching either “Error 5: Access is denied” or “Event ID 532” yielded no useful results and in some cases pointed you in completely the wrong direction.

I recently watched Mark Russinovich’s on-line video titled, “Case of the Unexplained 2010,” which is an excellent tutorial on how to use the Sysinternals utility Process Monitor.

Note: Video of this webcast is listed at the end of this article.

So, what better time to put this knowledge to use and find out what is going on underneath the hood by firing-up Process Monitor.

Note: A link to the download for Sysinternals is at the end of this article.

After opening Process Monitor the first thing I did was reduce the noise by including only services.exe. After scrolling through the many results I finally hit paydirt when I saw “ACCESS DENIED” in the results column.

You can run but you can't hide from Process Monitor...

Ok, now we’re getting somewhere…

You can see in the above screenshot that the QueryOpen operation on aspnet_state.exe is successful but as soon as the operating system attempts the CreateFile operation it fails with the access denied error message.

I then opened Windows Explorer and saw that someone did something that they should not have done. A user had modified the NTFS file permissions on the aspnet_state.exe file from its default permissions. You can see from the below screenshot that the user not only modified the NTFS file permissions but also prevented inheritable permissions from propagating from the parent folder. Not good…

User = FAIL

This was quickly remedied by enabling inheritable permissions from the parent folder.
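On systems that include the icacls utility, the command-line equivalent is roughly the following (a sketch; the path shown is the standard .NET 2.0 location for aspnet_state.exe, and /reset replaces the ACL with the default inherited one):

icacls "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_state.exe" /reset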

I then opened Services.msc and I was able to successfully start the ASP.NET State Service.

A mechanic is only as good as the tools he has at his disposal. The Sysinternals Suite is one of those must-have tools in any IT admin’s toolbox.

Incidentally, I e-mailed Mark Russinovich and he will be including this in his future Case of the Unexplained presentations and in his new Sysinternals book that he is co-authoring.

Sysinternals Suite
http://technet.microsoft.com/en-us/sysinternals/bb842062

Case of the Unexplained 2010 – Mark Russinovich
http://www.msteched.com/2010/NorthAmerica/WCL315

Mark Russinovich’s Blog
http://blogs.technet.com/b/markrussinovich/

Posted in Sysinternals | 2 Comments »

Running a Command Prompt as NT AUTHORITY\SYSTEM

Posted by Mike Howells on February 12, 2011

I recently ran into a situation where I was using the SysInternals tool ProcDump to write a dump file to be examined for a memory leak.

The problem started when trying to run ProcDump against the process oracle.exe. The error message was “Access denied.”

I was an administrator on the server so how could I become more powerful than an administrator?

The answer comes in the form of opening a command prompt as NT AUTHORITY\SYSTEM, which will then grant us the authority to access the oracle.exe process to create a dump file.

The first step is to download the Sysinternals tool PsExec from the below URL:

http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx

Extract PsTools.zip to a folder on your hard disk.

Launch a command prompt as administrator (right-click the command prompt shortcut):

In the command prompt navigate to the folder containing the PsTools.zip extracted data.

We will now launch PsExec.exe with the -i and -s switches to launch the program interactively using Local System.

psexec.exe -i -s %SystemRoot%\system32\cmd.exe

Type whoami at the newly opened command prompt and you will see that you are now running as NT AUTHORITY\SYSTEM:

You can now execute ProcDump against the process that you were previously denied access to and complete your work.
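For example, the dump that started this whole exercise would look roughly like this (-ma writes a full memory dump; the output path is up to you):

procdump.exe -ma oracle.exe C:\dumps\oracle.dmp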

Note: If your system does not have whoami.exe, you can typically find this program as a separate download via the resource kit or support tools appropriate to your Microsoft operating system.

Posted in Administration | 7 Comments »

My Vonage Experience

Posted by Mike Howells on December 25, 2010

In early December 2010, I decided to give Vonage a try. Why would I want to switch from a service that I had been using since 1994 and that had given me essentially no problems? The simple answer is the outrageous amount that AT&T charges for their service. Below is a screenshot of my local services plan from AT&T. The monthly service costs $54.03, which includes local and unlimited long distance. The surcharges and other fees totaled $10.95/month! That is more than the entire monthly fee for Vonage’s Lite service! Quite frankly, I was tired of paying a hefty fee for a service I barely used.

AT&T Monthly Service

So, on December 6, 2010 I ordered Vonage’s Lite service. Their website was straightforward and easy to use and the ordering process was painless.

In the meantime, I had brushed up on some Vonage forums, looking at issues other people were having, but I dismissed those problems, confident they would not affect me.

A few days later the Vonage hardware arrived and I proceeded to set up the service. The instructions wanted me to replace my DSL modem/router with their Vonage router (model VDV22). This was not possible since my AT&T 2Wire DSL router uses an RJ-11 jack that goes from the wall to the router. The Vonage router has RJ-45 jacks only, so it was not compatible with my setup. No problem. I’ll just set up my Vonage router behind my AT&T router, which many others have done with success.

I then connected the Vonage blue port (WAN) and the Vonage yellow port (LAN) to my switch. Since my computer has two LAN ports, I would just connect both of those ports to the switch and configure Vonage via this method. After I got everything connected, I logged into the Vonage web page (192.168.15.1) and could see that the status page kept saying, “Could not connect to configuration server.” I’ll spare you the gory details, but after a call to Vonage tech support they determined that the unit was faulty. So, I spent $11 to ship the unit back to Vonage and waited a few more days for a replacement unit to arrive.

I connected the replacement unit to my switched environment using the same configuration I used for the first Vonage device. I logged into the Vonage web page and the damn thing gave me the same error (Could not connect to configuration server). At this point I pretty much ruled out another hardware error; the odds were stacked against that probability. It had to be something with the way I was connecting my Vonage device to my network. Instead of connecting both of the Vonage device’s ports to my network, I decided to connect just the WAN (blue) port to my switch. Sure enough, the device worked. Nowhere in the documentation did it say that you could not have both the WAN and LAN ports connected to the same VLAN, which was causing some sort of conflict with the Vonage device looping back to itself. One strike against Vonage.

Now that I had the device up and running, I could begin my testing. At first the device seemed to work properly, with an occasional audio dropout that I didn’t think much of at the time. It wasn’t until I called to make a dentist appointment that the call audio started completely cutting out every 30 seconds or so. The audio dropout was consistent and pervasive. I opened a command prompt and initiated a persistent ping to the Vonage device to see what kind of latency it was experiencing (see screenshot below).

Vonage audio dropouts

These are some of the highest latency times that I’ve ever seen to a single device. Even my wireless aircard from Sprint has better latencies, and this Vonage device was sitting on my gigabit switched network! I confirmed that the observed latency coincided with the audio dropouts by making test calls and watching the high latencies line up with complete audio cutouts. Since there was no network traffic on my LAN that could account for this latency, it felt like a bug in the Vonage device. So, I power cycled the device and the problem went away. Strike two against Vonage.
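For reference, the persistent ping was nothing fancy, just the stock Windows ping with the -t switch aimed at the Vonage device’s address from earlier:

ping -t 192.168.15.1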

I was hoping that this was an isolated case and that Vonage would somehow start working better. As I would see this was not to be the case.

To continue my testing, I forwarded my AT&T home phone number to my new Vonage number so that I could start real-world testing. Every single call that I received or placed experienced audio dropouts. This was amazing to me since my DSL service is 6 Mbps downstream and 768 Kbps upstream, plenty to handle VoIP service. In addition, I had just changed my AT&T DSL profile from Interleaved to FastPath, which reduced my latency to my DSLAM from 47 ms to 9 ms and made a huge difference in Internet response times. My Internet circuit is also barely utilized, so the notion that my circuit was somehow saturated is not a valid argument.

I became so frustrated that I downloaded and installed a trial version of SolarWinds’ ipMonitor. I was familiar with ipMonitor, having used it for years in my capacity as a data center engineer in a hosted environment. I set up a monitoring session in ipMonitor to send a ping to the Vonage device every second and e-mail me a daily report of the results. What I found was astonishing. For a device that sits idle most of the time, I was seeing ping response times approaching 100 ms or more. Below is a screenshot of ipMonitor monitoring the device; the second row shows the tale of the tape. The large blue mountains show latency to the device for which I had no explanation. The only thing I could think of was that the device was busy doing something, but it wasn’t busy doing anything: no phone calls were made or received during this monitoring period. Strike three against Vonage.

VDV22 response time

It was time to cancel the service. I had spent enough time and energy troubleshooting this issue, and I decided to be done with it. The Vonage account management rep on the phone offered to send out a technician to help me get it installed, but I don’t see how a technician could resolve high ping times to the device, which had absolutely nothing to do with the way it was installed. So I declined, and the service was canceled on December 23, 2010.

The only other VoIP device out there that gets my attention is Ooma. The downside to this device is that you have to purchase it for a one-time cost of $199 and there is no trial period, so once you buy it you’re stuck with it.

I may revisit Vonage in another year to see if they’ve gotten any better. For now, I’ll stick with the high-priced oversubscribed service that I barely use knowing that I’ll have good call quality.

The old adage of you get what you pay for rings true in this case…

Posted in VoIP | 2 Comments »