
Almost as soon as AirPrint became available we all wanted to print from our iOS devices.

First we tried an application that turns any Mac into an iOS print server using printer sharing. That application works well for my home needs, but I found that with more than a few printers shared, I had to reboot the Mac regularly. It seemed to have trouble switching between printers with multiple users.

Still, several churches seem to be doing well using Mac applications like Printopia or FingerPrint.

We decided to try the Lantronix xPrintServer for about $150. This is a standalone device that claims it will automatically detect all of your network printers and configure itself for you. It runs CUPS, the UNIX printing system, which is pretty stable. Upon reviewing the web site, we learned that this device could support most of our printers, even some older ones.

It is an impressive-looking device, not a whole lot larger than an iPhone 4. Of course, it didn’t automatically find all of our printers either. The documentation it comes with isn’t very clear, but there is a way to find its web GUI and IP address (it uses DHCP).

Before you plug it in to anything, start with this:

  • Be sure you are plugged into a port that is on a subnet that has printers.
  • Lantronix technical support confirmed our assumption that the xPrintServer does not currently have a way to auto-detect or configure a proxy.
  • If you have a proxy, you need to know the IP address of the device so you can open up port 80 for that address in your firewall to allow it to download the latest drivers from the web site. It only needs to do this once, then you can remove that rule.
  • Once it has configured the printers on that subnet, you can move the device to all the subnets your printers are on and have it rescan until you have the ones you want.
  • Once you have your printers you need to move it to the wifi VLAN that your iOS devices use.
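
Since the xPrintServer is built on CUPS, the ordinary CUPS client tools can help verify the queues it created once you reach it on the network. This is a minimal sketch from a Mac or Linux machine; the IP address and queue name are assumptions, not values from the device.

```shell
# Point the CUPS client tools at the xPrintServer (hypothetical IP)
export CUPS_SERVER=192.168.1.50

# List the queues the device auto-created and their current state
lpstat -p -d

# Send a quick test job to one queue (queue name is hypothetical)
echo "xPrintServer test page" | lp -d Office_LaserJet
```

This is also an easy way to test print to each queue before handing it to iOS users, since some auto-selected drivers don't work properly.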

One other thing to note: you will want to test print to those printers, because some of the drivers don’t work properly. I suspect it still requires PostScript, which not all printers support.

Another thing to consider is that, like the printer sharing software we tried first, the xPrintServer can handle only one print job at a time. It does seem to switch between printers well, but someday we could need more xPrintServers. Their web site says to have one for each subnet, but as I’ve noted above, we learned that isn’t true; it just isn’t automatic.

Paul, our engineer from BEMA Information Technologies, helped me figure most of this out since I got stuck after getting to the web GUI. He was able to speak geek to Lantronix technical support.

This is the third post about our recent Domain Migration.  If you happened upon this without reading the first two posts, you can start at this link.

Monday, supporting users in the new Domain…

Three of our volunteers showed up bright and early to prepare for any fallout that we encountered. Dustin from Solerant arrived while I picked up kolaches (man-food type pastries) and began the process of moving our Exchange 2010 server to the new domain. Unfortunately, we tried to help him by spinning up a new server and installing Exchange Server, which he had to delete and start over. I should have just spun up and patched a brand new server and stopped there.

Resulting Challenges

  • We had some hiccups with the data conversion of some LUNs; we later learned that our SAN switches were not correctly configured and Jumbo Frames weren’t enabled, which we will fix during our next SAN firmware update.
  • Since the file server was down, the GPO that pushed the RemoteApps was not able to install the ACS RemoteApps until we moved the pointer to another location, which makes more sense in the end. We now share the default location where the Microsoft Packager creates the package.
  • We had some issues with Group Policies in the new domain. Perhaps they are default policies designed for enhanced security, but they did create some issues for us. Among them:
    • Our Windows users still need to be local administrators, but a GPO was preventing that.
    • We have two wide-format printers that we do not include on our Print Server, since we knew one of them did not do well over the network.  The older one was in use all the time, but broke down on the team while we were making these changes.  The newer one would not print at all, which was odd.  We had not printed to it since July, so it was hard to pin down what the problem was.  It turned out the jobs would be deleted almost as soon as they entered the print monitor.  The printer tech told me it was a switch, but I guessed it was a GPO, and it was: a radio button that wouldn’t allow printing to other printers.
    • The Phone Server caused us some problems because our own Group Policy removed local administrator rights when we initially removed the server from the domain.  We didn’t notice anything until we broke the trust between domains and shut down the old domain on Friday, after email had successfully moved.  Once the trust was broken, voicemail stopped forwarding to email, which everyone missed right away.  One of my volunteers worked on this the following Sunday and broke voicemail delivery entirely, which I didn’t notice until a few days later.  It turned out the phone server needed the service accounts to have local administrator rights, and a new backup software that had been installed on that server was also interfering.  Voicemail will deliver now, and it will also forward to Gmail, so it is only a matter of time until we figure out how to get it to forward to our email.  This server uses its own SMTP to handle email.  I left for vacation before figuring this last part out.  I don’t understand phone systems at all, so I am thankful that I have enough troubleshooting skills to get as far as I did.
    • The copiers’ scan-to-folder function quit working because we forgot to migrate that AD service account; I also fixed the DNS and domain name pointers that included one of the old Domain Controllers.
    • End users handled everything with much grace!  Many of them needed help with email profiles being removed and re-added through the Control Panel.  Group Policy didn’t always lay down the first time a user logged in, but we helped them quickly with the gpupdate /force command.  Vista computers kept both sets of printers, from the old domain and the new domain.  Since I only have a few, I just deleted the old printers.
    • There was also a day when Exchange had a few services quit on us, but that hasn’t occurred since.  It distressed a few users that Friday, but we were able to rectify it rather quickly with Solerant’s help.
    • There were some changes we needed to make to our Proxy, since it could resolve incorrectly off site.  Once we made the change, we were able to help users with laptops and the Mac users.
    • Exchange didn’t get completely migrated until the fourth day due to problems.  I was able to offer workarounds for users via webmail for Monday and Tuesday, and our list serve or Constant Contact for mass mail.  Dustin’s problems began because we migrated user accounts to the new domain and included the Exchange attributes.  When he did get to where he could move mailboxes, several kept “failing”.  Perhaps it was because we had the SAN switches configured incorrectly and it was taking too long to move mailboxes.  Dustin did get help from some of his colleagues and some answers from Microsoft.  In the end, all of the “failed” mailboxes were there, so nobody lost any email.  There was the oddity of “duplicate” users, which he deleted using the Exchange Management Shell.  One of my volunteers and I were able to work as a team with Dustin on Thursday morning to reconnect each user with their mailbox and release our spooled mail from SpamSoap.  No incoming mail was lost during that time.  It was a wonderful sound to hear people’s phones and Outlook begin chiming that their mail was being delivered.
    • With all the changes we made, we had some issues supporting Windows XP.  Rather than make the security changes we would need to help them, we will help those users upgrade to Windows 7.  Windows XP has a different, less secure way of handling Kerberos.  For instance, drives would map, but users were unable to get to the network shares.
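
For the Group Policy troubles above, the built-in Windows tools are usually enough to see which policies actually landed on a client. A rough sketch, run in an elevated command prompt on the affected machine (the report file name is arbitrary):

```shell
REM Re-pull and apply all policies immediately instead of waiting for the refresh interval
gpupdate /force

REM Summarize which GPOs applied to this computer and user, and which were filtered out
gpresult /r

REM Dump a full HTML report to review individual settings, such as a printer restriction
gpresult /h C:\gpo-report.html
```

The `gpresult` report was the kind of thing that would have pointed at the GPO behind the wide-format printer problem much faster than guessing.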

Problems caused because Microsoft Exchange was moved last

The ADMT was perfect for migrating users and some computers, but it should not have been used to move users before the Exchange Server was ready.  I wish there had been more lead time for the project so this could have gone better.  The other truth is that this is something not often done; not many systems engineers will ever do this kind of project.  We made it through, and overall it went well.  I just like to look for how things could have been better, so that with whatever project we complete next we can find more potential problems ahead of time.

  • Because we moved users and computers first AND we migrated Exchange attributes, Dustin had to remove those attributes and re-migrate all of those users a second time.
  • While Exchange was migrating, the duplicate users in the new domain REALLY confused the Macs, which had been working beautifully.  Mac users were unable to log in at all if they had rebooted the night before.  If they had not been shut down, or if they had been off site, they could log in and work.  I was able to pinpoint the problem’s relation to the second user account, and my Solerant tech was able to find a temporary workaround for users by using an account that didn’t have a mailbox attached to it.
  • The Remote Desktop Servers all had triple profiles for users (one for the .local Domain and two for the new).  I suspect this may be related to the same issue above compounded by the old domain.
  • This prevented Outlook from launching successfully and autoconfiguring mail for those profiles.  My solution here was to just remove all of the user profiles from those servers and let users start over, to minimize intervention on my part.  Some users with access to different tools still needed a little assistance, but this wasn’t a horrible problem, just a little time.

The Success we have seen already

Something we noticed immediately is how quickly a Windows user can get from the login screen to their desktop.  GPOs apply very quickly once the user has a profile on that machine, and I really like the new drive mapping GPO.

Mac users immediately noticed a difference.  Many Mac users have told me how much faster login is.  That first week, the Mac user with the oldest laptop told me how he loved how quickly he could log in, both on and off site.  The new Mac OS X share-connecting login script also applies quickly.

All of our Group Policy Objects apply whether we like it or not.  Okay, that last part was for humor, since we told them to do it to begin with.  This was great for cleaning up our Policies and simplifying things as best we can, something that gets harder to accomplish the larger and more complex we get.

More of my volunteers have a thorough understanding of what they support apart from the networks they work with for their full-time jobs.  I hope this helps us as we grow into more campuses and provide the technology needed to continue to further our mission.

Really, this project went well overall.  To be able to find the problems in spite of all of the changes that occurred at the same time is truly miraculous, I believe. I am thankful for my volunteers who helped:  Kevin Creason, Michael Huset, Paul Salvo, who spent the most time preparing for migration, day of migration, and helped with fallout afterward.  I am also thankful for Dustin Corkern, the Solerant Engineer who stuck with the email migration until it was done.  Thanks also to Paul Rhodes, Donald Cook and Jeff Ammerman who were able to help when they could before and on the day of migration.

This is the second blog post discussing the change from a .local Domain to another namespace in order to better support the Mac clients in our Domain.

Planning for the new Domain

The initial decision to move forward with the Domain change was easy compared to what came next.  The next challenge was how and when, based on our people resources, major church-wide events, and the launch of our 3rd campus.  As far as the life of Clear Creek goes, there are no very “slow” times in which a project like this would be easy.  The next consideration was when and how to do this to get the most out of our volunteer resources, and how much to ask of Solerant (our IT contractor).  This is important because while I have some technical skills, I’m really more of a project or resource manager.

  • We decided to deploy on October 9 & 10 because we could start after church services on Sunday and several of our volunteer team could also be present to help on Monday because they had Columbus Day off.
  • We also decided that migrating the Microsoft Exchange Server (email) was probably going to be the most challenging part of the migration and pose the most problems.  I thought that Solerant would be best to do this, because they already handle many Microsoft Exchange upgrades each year and could handle the added complications that went with a cross-domain migration of Exchange.
  • I polled the staff to see if making a huge change like this would seriously impair ministry.  I did my best to explain that everything was going to change, but I don’t know if there is really a way to explain just how much was going to change.  I tried to think of “what the worst case scenario” implications would be to best prepare them.

Once I made the case with my boss and laid out the basic plan we moved forward.  The team normally meets on Monday nights, but over the next few weeks we also met an additional night in order to account for everything that would need to be done.  Sunday the 9th would be the day that the most manpower was needed, so I wanted us to be ready to roll since that was going to be a long day.

Thankfully, our infrastructure has an EqualLogic SAN and a virtual environment using VMware.  If we had not had this environment, it would have cost a whole lot more to change our domain at the size we are now.  I think we had to add five virtual machines to complete the migration.

Creating a New Domain

The first thing we did was create a new Domain.  We “spun up” a new server in our virtual environment and made it the new Domain Controller.  We then needed to create a “trust” between the new domain and the existing domain.  In order to do that, we had to remove the .org from the existing domain’s forward lookup DNS records and set up forwarders to the new Domain’s server.
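
We didn't capture our exact commands, but the forwarding between the two domains' DNS servers can be scripted with the built-in `dnscmd` tool. A hedged sketch, with made-up domain names and addresses standing in for ours:

```shell
REM On the old .local DNS server: create a conditional forwarder zone
REM for the new domain (zone name and DC address are hypothetical)
dnscmd /ZoneAdd newdomain.org /Forwarder 10.0.0.10

REM On the new domain's DNS server: forward lookups back to the .local domain
dnscmd /ZoneAdd clearcreek.local /Forwarder 10.0.0.5

REM Verify name resolution crosses over before attempting the trust
nslookup dc1.newdomain.org
```

Getting resolution working in both directions first saves a lot of confusing trust-creation errors.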

Extending the Active Directory Schema

This time around, we decided that maybe the Magic Triangle was causing too much delay for the Macs to find resources, and that extending the Schema in AD might be a better option.  We used Apple’s white paper to change the 40 attributes necessary to support Macs in the Domain.  To me, this sounds like a lot, but now I understand that it isn’t, in the big-picture world of a Domain.

Valuable links:  Apple’s KB and a good blog post, but note that the Dynamic UID problem no longer applies with a recently patched OS X.  This is how to modify the schema, and the thing we still need to do for applying WGM policies to computer lists.
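
Apple distributes the schema changes as LDIF, and applying an LDIF file on a Domain Controller is typically an `ldifde` import. A sketch under stated assumptions: the file name is hypothetical, and `-c` rewrites the LDIF's placeholder suffix to our domain.

```shell
REM Import the schema extensions on a DC (run with Schema Admin rights).
REM apple-schema.ldf is a hypothetical file name.
ldifde -i -f apple-schema.ldf -c "DC=X" "DC=newdomain,DC=org"

REM Spot-check that the schema partition picked up the Apple attributes
ldifde -f check.ldf -d "CN=Schema,CN=Configuration,DC=newdomain,DC=org" -r "(cn=apple-user*)"
```

Schema extensions are one-way, so testing the import in a lab domain first is worth the extra time.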

Mimic Necessary Foundation Services

So now we had a domain consisting of a single machine that could talk to the other domain, and we could find resources between them.  Then we began making duplicates of the resources that we wouldn’t migrate:

  • We created a certificate authority with the Web certificate component.
  • We stood up a Network Policy Server (NPS) to prepare the way to move our existing WiFi system.
  • We cloned the Print Server and added it to the new Domain, which saved a huge amount of time adding drivers and creating shares.
  • Evaluated/Exported all of our Group Policy Objects (GPO) to determine which ones needed to come over, which ones weren’t working, and how we could simplify.
  • Evaluated our Active Directory structure, since we had a lot of legacy structure that was not clear, possibly left over from Small Business Server a long time ago.
  • Decided that we would eliminate whatever XP support we could and get the rest of our machines to Windows 7 where possible.  We are currently down to 3 machines with Windows XP, with plans to get them out, and 5 Vista computers which we will be able to image very soon.
  • Created a new Windows 7 VM, and installed the Active Directory Migration Tool (ADMT) from Microsoft on another virtual server joined to the new domain.
  • Decided we would migrate Microsoft Exchange as the last step, since Solerant was unable to do it beforehand and I only had a short window for my volunteers to help.  In hindsight, it would probably have been best to migrate Exchange FIRST, so that we could migrate the users with their Exchange attributes.
  • We wiped our physical Domain Controller and loaded it with Server 2008 R2 the week before the migration, to match all of our other servers.
  • The morning of our migration, we demoted another Domain Controller in our .local domain so that we could keep the new Domain pointing to similar DNS and avoid problems with more devices.
  • We made sure we had a second virtual Domain Controller on our other virtual host server so that we would be prepared if we had any power issues.  We agreed that it would not be a good idea to move a domain controller from one domain to another; in my experience, those things never seem to demote properly.  This added a little extra work re-creating the DHCP server, its options, and its subnets.
  • Joined some test machines as clean, new clients to the new domain, including some Macs: a Lion mini, a Snow Leopard mini, and a Lion Server mini.  We wanted to get things right with Lion from the start.
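
Binding those test Macs can be done from Terminal rather than the Directory Utility GUI. A minimal sketch, assuming hypothetical domain, computer, and admin names (on newer OS X releases the `-computer` flag becomes `-computerid`):

```shell
# Bind a test Mac to the new domain (all names are placeholders)
sudo dsconfigad -add newdomain.org -computer testmac01 -username domainadmin

# Confirm the binding and review the Active Directory plugin settings
dsconfigad -show
```

Scripting the bind makes it easy to repeat on each test machine and, later, on the production Macs.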

What would end users experience during migration?

Since I was not able to quickly find information online about what a user should expect when all of this occurred, I put myself in their shoes in order to make this go as smoothly as possible for them.  My team was busy working on the major aspects of the migration, but they really aren’t wired to think about how the regular person, who just wants their computer to do its job, might experience EVERYTHING changing.  I started off with what I know, which is that when you add a computer to a domain, there is no user profile.  My assumption became that if you took the same computer from one domain and moved it to another, users would likely have a brand new profile and not see the same desktop they did the last time they logged in.

  • For most users, this would mean that they would essentially not have anything in “My Documents” or any of their personal settings or see anything they kept on their desktop.
  • Laptops and Macs were going to need extra attention.  The people who use those computers have very specific needs, so more lives on their local machines, and we thought it would be important to migrate their user data.  We needed tools or scripts for both platforms.  We chose ProfWiz for Windows machines and used these magic commands for Macs.
  • I also knew that because of email, they would essentially be operating as two users: one for email from the existing domain, and their other account in the new domain.  I also realized that the regular user was not going to see the distinction.
  • How do I communicate with everyone in a way that doesn’t alienate or frustrate?  I decided to prepare via staff-wide email and personal conversations to make sure things were clear to key people.  Then I picked a person in various ministry areas whom I could contact with the latest information should email be down or should we have any major problems.  I could call or text them where to find instructions to forward to the people in their area.
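
I won't reproduce the linked "magic commands" here, but the general shape of a Mac profile move is to copy the old home folder and hand ownership to the new domain account. A purely hypothetical sketch, with placeholder account names:

```shell
# Hypothetical sketch of migrating a Mac user's home folder to a new domain account.
# "olduser" and "newuser" are placeholders, not our real account names.
# First log in once as the new domain user so its account and home are created,
# then, as a local admin:
sudo ditto /Users/olduser /Users/newuser   # copy the old home folder contents
sudo chown -R newuser /Users/newuser       # hand ownership to the new account
```

The copy-then-chown approach preserves the user's desktop, documents, and settings, which was exactly the end-user experience we were trying to protect.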

How I prepared everyone for the migration

  • I sent out emails to staff asking them to make sure all of their documents were living on our network drives.  We’ve done a pretty good job of having users understand where to store their data.  We chose not to redirect profiles, since many users have benefited from putting disposable data on their local computer.
  • People also forget the small things like how to set a default printer or how to export and import their browsing Favorites.  I sent instructions for both in emails ahead of the migration and made sure key people knew how to do those things or print them out so they could help each other.
  • I also sent out instructions to remind users how to report problems by using Solerant’s ticket system or to leave me voicemail message.
  • I made sure that laptop users verified that their data was on the network, and for Mac users we made sure they had a full backup in case we had a problem and were not able to help them as quickly as we would like.  I’m all for having a backup of your backup.
  • The users also needed to know that they would need to use Webmail (OWA) or email from their cell phones on Monday, until we completed the migration of mail.
  • I made sure I thought about all those “what if’s” on things failing.  My goal?  Have a backup plan to support the end user so they can get their job done no matter what.  We can’t afford for them to not be able to get their job done, nor can we afford to lose credibility with the users because the change was so painful.
  • There isn’t a whole lot on the actual Domain creation and migration that I can do, so I made sure my spreadsheet for all of the servers with their roles was updated and questioned how each one of them would be affected during this change.
  • I created a map of where every machine was physically located and cleaned up the current Active Directory to remove any computers and users that were no longer in service.
  • There were a few other obscure things to remind the team of: making sure the SSO for our RemoteApps was accounted for, KMS setup in the new domain, and making sure any Fully Qualified Domain Name (FQDN) references to the old Domain were accounted for as well.
  • During our tests we noticed that our current Domain policy was actually working, in that it stripped local administrator rights from local user accounts and made them “guest” users while in the Domain.  To work around this, I had to make sure my team had good instructions for moving from computer to computer on the day of our migration.
  • I also made our Resources team (DVD/CD production of messages) and our Database Reporting team aware.  Neither of those teams has a staff person in the office during the week, so I needed them to think about how this would affect them and be willing to test, to avoid technical issues when they meet or when Sunday comes.


How long did it take?

This is rather tricky to answer, but since I have several people each with a little bit of time, I believe it went as well as could be expected, with some parts better than others.  It took about three calendar weeks to prepare, one day to move all of the users and computers, and three days to get email moved.  Again, hindsight says we should have moved email first, and I still cannot guarantee that it would have been easier.

Moving users and computers

This was a very long day.  We had a good lunch and started after the last church service.  By this time we had made sure we had all of our new infrastructure in place.  I’d already re-imaged some users’ computers during the morning services.  My thought here was that since they were going to have to essentially start over, I might as well give them Windows 7 as well, so they wouldn’t have to start over twice within a few months.  There were only a handful of machines I was unable to take care of, due to the software or hardware they were running.

  • Six of my volunteers (and I) helped on Sunday, which was a kind of tedious process, but these guys were very good.  We worked from 1:30 p.m. until around 11:00 p.m.
  • To start, I had three people working on servers/user migration and the other three working on clients.  It seemed I was more useful answering questions, but I did get to move a few clients.
  • One of those guys made sure we migrated our NOD32 (antivirus) so that clients would see the new server.
  • We also verified DNS for some other servers that were not in the Domain so they would work well after we were done.
  • We moved Ruckus (our Wifi system) pointers to the new Domain NPS (Network Policy Server).
  • We used the ADMT tool to migrate users, had them keep the same Security Identifier (SID), and migrated the password.  The latter created angst with my security-minded volunteers, but I really needed to protect the end users here to prevent distress.  After they migrated the users, I went through each user account, unchecked the box that requires them to change the password, and removed the login script, since we now push the drive mappings via Group Policy.  I have to say this GPO is fast and beautiful compared to a login script!
  • We moved some servers and workstations using the ADMT tool, but in hindsight I’m not sure that was a good idea for the phone or Remote Desktop Servers.
  • We used ADMT to migrate the file server.  I believe it was very important to move this server this way, because it migrated permissions and security groups as well.  Some folders appear to have had a hiccup and did not duplicate properly, but fixing permissions for a few directories was not as bad as having to start over.  We also consolidated some of our drive mappings to make it easier for our Mac users, since they have to browse differently to get to the same shares we make appear magically for the Windows users.
  • Our church management system is ACS, and we run it via Microsoft’s RemoteApp Remote Desktop Services option from a Server 2008 R2 server.  I created new RemoteApp packages for People Suite and Financials so they could be applied via GPO.
  • Once the servers were done, two of the volunteers started on laptop users, most of whom had left their laptops in their offices (at our request) to allow us to help them before Monday.  We used a tool called ProfWiz to migrate user profiles on Windows machines, and some magic commands on the Macs to move their profiles.  I wish there had been time to do this for every user, but we just had too many computers.
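
The per-account cleanup after ADMT (unchecking "must change password" and clearing the login script) can also be done in bulk with the built-in `ds*` tools instead of clicking through each account. A sketch with a hypothetical OU path:

```shell
REM For every user in the migrated OU (OU path is hypothetical):
REM clear "user must change password at next logon" and blank the login script,
REM since drive mappings now come from the Group Policy preference.
dsquery user "OU=Staff,DC=newdomain,DC=org" | dsmod user -mustchpwd no -loscr ""
```

Piping `dsquery` into `dsmod` handles every account in one pass, which matters when a migration day runs until 11:00 p.m.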

When we finished, we called it a night.  We had to convert some data LUNs overnight so that we could completely move to the new Veeam backup software (designed for virtual environments).  We left that running while we slept.

Final Blog Post in this series will cover:

Domain Migration:  Challenges & Successes

Domain Migration

This is the first of a series of lengthy blog posts on our recent Domain migration.

About 2½ years ago, Clear Creek Community Church began integrating Macs into our Domain to provide the Arts team with our network resources and make them equal clients.  The goal was to put our IT resources on mission.

Historical reference and mission orientation

To understand why we did this would require us to go back to the whole church mission and strategy:

  • Lead unchurched people to become fully devoted followers of Christ.
  • Provide multiple campuses of our church body from the Beltway to the Beach and from the Bay to Brazoria County.

My job is to constantly focus on how to accomplish our mission and function like one church body by providing the best technology support to those who directly accomplish the mission.

Clear Creek has grown at a rapid pace, which is fantastic for the mission!  But it becomes more challenging to provide for the needs of the staff and volunteers who accomplish our mission.  People need to get their jobs done in a short amount of time.  This is true for everyone in any type of work, but we feel it more strongly since the majority of our staff are part-time or gifted volunteers with a heart for the mission.  This growth also highlights how important it is for the team to be unified and cohesive.  I strongly felt the need for the Technical and Creative Arts teams to be able to bring their resources into the same cohesive integration and organization, rather than leaving them outside to fend and manage for themselves.

It would be a change in operations for them, but I could see how it would help make the team more cohesive and integrated on mission.  They also liked the idea of organization and integration.  With the people hurdle done, the technology hurdle remains.  Computers just do what they’re told (no matter what we think when they seemingly misbehave), so we just need to tell them what to do…

Integrating Windows and Apples

Plenty of organizations have integrated Apple Mac OS X systems into their corporate environment with success; just as many have attempted and failed.  My volunteer base is not full of people who know how to support a mixed network; it’s not a very common task.  Most business environments choose for the user what tools they get to use.  For us, like many small or large environments, it still makes sense for most users to have a Windows machine because of the business-type tools we have chosen to do the work.  Music and video professionals still prefer Macs to get their work done in the best possible way.  Therefore, the arts teams had both a Windows machine and a Mac, which I’ve always felt was a waste of resources.  There had to be a better way to help them.

The push-back in most IT shops, including churches, is that Macs are too expensive and support for them takes longer, since they aren’t designed for business environments.  We also get push-back from users who prefer Apple over Windows because that is what they are comfortable with.  I mention this because once you begin adding Macs to your network, more people are going to want them.  This will be a balance: providing the tools people want or need within the boundaries of good stewardship, without alienating end users because they think we are mean.

There really are no “Best Practices” when you support multiple operating systems.  You can accomplish this by one of three methods: a 3rd-party per-client product, the Dual Directory method, or extending the schema in Microsoft’s Active Directory.  It really takes analysis of the tools and the structure of the organization, and a lot of trial and error.  We have taken note of how many churches have jumped around between multiple 3rd-party tools or removed Macs from their domain altogether.  We still think we can find a way to make the Mac users’ machines happy and live well among us.

We first integrated our Macs within our domain using the Dual Directory method, otherwise known as the Magic Triangle.  We subscribe to a Mac Enterprise users group, where we have learned how many schools and businesses handle Macs; they use one of those three methods.

We ran into issues as soon as Snow Leopard came out.  It caused Macs on our network to take forever to log in and connect to resources.  It was even more painful if you had a laptop and logged in outside of our network.  This wasn’t too bad with just a few Mac users, since most of them were on desktops and the mobile users were willing to put up with these shortcomings.

Then some other key users switched to a Mac, and it became evident that this problem was going to get worse.  So a year into our integration, we researched and learned that the problem could be that our internal Domain was a .local namespace.  Push came to shove, so I brought up the discussion of getting rid of the split DNS.  We agreed at the time that while it sounded like a good idea, there was really no way to know whether this would make things significantly better for Mac users, or provide any benefit to the Windows users.  Since our network had grown to 100+ PCs, 20+ servers, and 18+ Macs, we dropped the idea.

What is the issue with .local and Mac OS X?

This is a challenge to explain, but this is the answer that finally made sense to me.  It turned out I really just needed a good picture of how a Mac functions.  Think of it this way: a Mac is designed “to just work,” which is wonderful if you are an end user living on an island of your own Mac.  You turn on your Mac, and since its “name” always ends in .local, it goes and looks for anything else with that name to be friends with, in case you want to share iTunes or anything else with them.

This isn’t so great in a mixed network with a .local namespace. A domain can’t belong to two masters; it can’t be the fun iTunes Library sharing environment and the DNS-served business environment at the same time.

When you log in to a .local domain that uses the Dual Directory method, the Mac does what it knows how to do and uses Multicast DNS to ask all the machines around it which one knows what it is supposed to do, and it doesn’t really seem to listen since there are too many things for it to check through.  Then, once it finds the domain, it will log in.  It takes about 3 minutes on site, and about 5 minutes or more away from our network, to see your desktop and be ready to work.
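You can actually watch this from Terminal on a Mac. A rough sketch, where dc01.local and 10.0.0.10 are made-up names for a domain controller and an internal DNS server, not our real ones:

```shell
# .local names go to multicast DNS (Bonjour), not the configured DNS servers.
# Query the mDNS responder directly for a (hypothetical) domain controller:
dns-sd -Q dc01.local A IN

# Ask the system resolver, which routes .local through mDNS as well:
dscacheutil -q host -a name dc01.local

# Compare with a query sent straight at the internal DNS server:
nslookup dc01.local 10.0.0.10
```

If the first two commands sit there waiting on multicast answers while the third returns instantly, you are seeing the same delay the login process suffers.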

Time to change

It became clear that moving away from the .local namespace was important.  We still couldn’t say exactly what this would buy us, and making the change was going to affect every device, every server, and every USER.  The last was my biggest concern.  Was it worth disrupting EVERY user to make the world better for Macs?  We decided it was worth trying to help those Mac users before we get any larger and before we get to the point that we serve our church across multiple campuses with the same resources.  Currently, we have multiple campuses, but we office out of the same location.

In the recent past the benefits did not seem to outweigh the risks, but this year we gained some new dedicated volunteers:  one who supports a mixed environment like ours, and another who has the same Windows infrastructure in his work environment as we do. With that solid foundation suddenly in place, it was time to push the issue again.  My questions were:

  • Why does having split DNS with a .local namespace present such an issue and is it always going to be an issue?
  • What do we do now that Lion has been released, which has its own connectivity issues with Windows, and there isn’t a supportable way to downgrade newer machines to Snow Leopard to manage our growing need for Macs?

Now, with a Lion breathing down our necks, the benefits finally outweighed the risks.

The latest challenge we have overcome is related to having a .local internal domain, in particular with Mac clients and Exchange connectivity.  Our MacBook users in particular had trouble getting email from Exchange via Outlook or Entourage.  They had to edit their Account Preferences and change the URL each time they went off campus or came back, since the internal .local URL did not work off-site and the external URL didn’t work on-site for them.  Though they were gracious enough to do this, it was definitely something we needed to address.  After some research, we learned this is a known issue for both Exchange Server 2007 and 2010, and there wasn’t a clear-cut answer in any of the forums we searched.

The answer is multiple parts since we did have the external mail server name working internally at one point.

  1. The first fix was another entry in our internal DNS that pointed another external server name (_autodiscover._tcp in the external DNS namespace) to the internal IP address.
  2. The second part was to launch the Exchange Management Console, go to Server Configuration -> Client Access, and make all of the internal server name references match the external (.org) server name references within each URL for OWA, Exchange ActiveSync, OAB, and the Exchange Control Panel.
  3. While still inside the Exchange Server, the EWS had to have the same server name listed in the internal reference as the external server name.  This had to be done through a command in the Exchange Management Shell.  It would be similar to this, but for the internal server.
  4. After these things were done, we did the iisreset command to restart IIS (can also be done through the GUI or in Services).
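Steps 2 through 4 above might look something like this in the Exchange Management Shell on Exchange 2010. The server name CAS01 and the mail.example.org URLs are invented placeholders for illustration, not our actual names; the console GUI accomplishes the same thing for step 2:

```powershell
# Hypothetical names -- substitute your own CAS server and external hostname.
# Step 2: point the internal URLs at the external (.org) name.
Set-OwaVirtualDirectory -Identity "CAS01\owa (Default Web Site)" -InternalUrl "https://mail.example.org/owa"
Set-ActiveSyncVirtualDirectory -Identity "CAS01\Microsoft-Server-ActiveSync (Default Web Site)" -InternalUrl "https://mail.example.org/Microsoft-Server-ActiveSync"
Set-OabVirtualDirectory -Identity "CAS01\OAB (Default Web Site)" -InternalUrl "https://mail.example.org/OAB"
Set-EcpVirtualDirectory -Identity "CAS01\ecp (Default Web Site)" -InternalUrl "https://mail.example.org/ecp"

# Step 3: Exchange Web Services (EWS) has no GUI for this, so use the shell.
Set-WebServicesVirtualDirectory -Identity "CAS01\EWS (Default Web Site)" -InternalUrl "https://mail.example.org/EWS/Exchange.asmx"

# Step 4: restart IIS so the new URLs take effect.
iisreset
```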
Once we had these changes implemented, it only partially worked, which was a puzzle.  It would work on wifi, but not on our LAN.  One of my volunteers did some sniffing and learned that even though we had a Proxy Auto-Configuration (PAC) file that told e-mail traffic not to go through the proxy, Outlook was still choosing to go through the proxy anyway. The entire system, except for Outlook, was honoring the PAC.
What we did then was manually recreate the proxy exclusions in the Mac’s network settings: checking the box to “Exclude Simple Hostnames” in the Ethernet device’s proxy configuration, and adding the internal server IP addresses and the external hostnames, including the autodiscover name, to “Bypass Proxy Settings for these Hosts and Domains”.
Once we did that, Outlook defaults to the external server name (.org) and also works internally.
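For reference, the kind of PAC rule that was supposed to keep mail traffic off the proxy looks something like this. All hostnames here are invented, and shExpMatch/isPlainHostName are helper functions supplied by the browser or OS PAC engine; the bypass list in the Mac’s network settings accomplishes the same thing on the client side:

```javascript
// Minimal PAC sketch -- mail.example.org, autodiscover.example.org, and the
// proxy address are placeholders, not our actual hosts.
function FindProxyForURL(url, host) {
  if (isPlainHostName(host) ||                       // bare internal names
      shExpMatch(host, "mail.example.org") ||        // the mail server
      shExpMatch(host, "autodiscover.example.org") ||// autodiscover lookups
      shExpMatch(host, "10.*")) {                    // internal IP range
    return "DIRECT";                                 // skip the proxy
  }
  return "PROXY proxy.example.org:8080";             // everything else
}
```

As we found, not every application honors the PAC, which is why the per-host bypass list on the Mac was the piece that finally made Outlook behave.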

A couple of months ago I learned I have a LOT of food allergies, via a blood test that tested 150 foods and spices.  I’d always suspected that I was sensitive to foods in the Dairy family, but was very surprised to learn that there were lots of spices, eggs, and yeasts that I am sensitive to as well.  For the first couple of weeks, I admit that I whined a lot and felt sorry for myself.  Now that I have been eating on a rotation diet for two months, I am beginning to see the difference and can now see that it is worth it.

What is a rotation diet?

First off, it is NOT Die with a T!  There are lots of rotation diets out there, but mine takes the different foods I CAN have, categorizes them (meats, grains, vegetables, spices), and rotates them so that no food is repeated within four days.  They say it takes about 4 days to get things out of your system.  Apparently, many of us have food sensitivities that we don’t realize because we repeat foods before they get out of our system.

What is the positive?

Health benefits would be one, but honestly, I’m still working on that.  I am slowly beginning to see benefits like some weight loss, being hungry more often, swallowing without gagging, feet that don’t hurt, and more energy.  There are still things that need to improve, but it takes time.

Another positive is that I am exploring more creative ways to eat food.  There were 50 foods for me to avoid, but 100 that I should focus on.  One of my biggest whines was that being sensitive to eggs and yeast means no cookies and bread.  So, I’m slowly looking for alternatives since those were my favorite bad things to eat.  This kind of makes me want to take some cooking classes and share with others having similar issues.

What is challenging?

Eating out is the biggest challenge.  For the most part, I don’t, since it is a lot of trouble.  The other challenge is that you have to cook every meal, and leftovers are hard to work in.  I’d much rather eat dinner leftovers for lunch, since I can’t just make a sandwich.

What is next?

My nutritionist said that once she’s seen some improvements as my body heals, we can add back some of the foods that I am sensitive to.  Some things that were “low” on my list were things like egg yolks, ginger, and coffee.  It is kind of hard to give up coffee when you are married to someone who roasts their own coffee.  I’m not sure what it looks like to add foods back, but I imagine that they will be rotated in like the foods I’m eating now.

What new fun recipe have I found?

Here is one I have been working on the past few weeks.  It is kind of a challenge to test new recipes like this when you can only eat them every four days.  🙂  This is something I’ve made up from ideas gleaned from several scones recipes I’ve used over the years.  No eggs, cream, wheat, or butter in these tasty treats! (I can have wheat though!)

Coconut Cherry Scones

1 1/4 cups whole spelt flour (or 1/2 cup whole wheat + 3/4 cup all-purpose flour)
1/4 cup sugar
1 1/2 tsp baking powder
dash of salt
1/8 tsp cloves
1/4 tsp cinnamon
1/4 cup coconut (unsweetened if you can find it)
1/4 cup frozen cherries, chopped
1/2 cup almond milk

Preheat oven to 400°F.  Line a baking sheet with parchment; set aside.  Combine the dry ingredients. Add the coconut and chopped frozen cherries (still frozen so they don’t bleed too much).  Pour in the almond milk all at once and stir with a fork until a dough forms.  On a floured board, knead 8 times.  The dough is going to be sticky, which is what you want.  Pat the dough into a 4″ x 11″ rectangle.  Cut it into thirds, then cut each third on the diagonal so that you have 6 right-triangle-shaped scones.  Use a wide knife or spatula to scrape them off the board and onto the parchment-lined sheet.  If you added too much flour, you can pat some almond milk on the tops, then sprinkle sugar on each scone.  Bake 15 minutes. Serve warm or cold.

What I like about these scones is that the moist dough makes moist scones, so I don’t miss not being able to put butter on them.  I don’t like to cook with homogenized fats, so I eliminate them when I can or make a substitution.  Scones usually have butter in them.  I did try margarine, since I am not supposed to have butter, but I couldn’t tell the difference either way.  Maybe it is the coconut that helps in this case?  Still, now I’m just sad that I have to wait four more days until I can have another!  🙂

I often get requests for instructions for this project.  It has been around for quite a while, but it always gets a fun response!  Here are instructions for the project as well as a video.

The video notes that I used some supplies from a kit club I am part of at The Scrapbook Junkie. They offer a monthly kit subscription with a new kit available on the last day of the month.  The store is in Webster, Texas, but I’m sure she’d be happy to mail the kits to you each month if you wished to subscribe to her club.

Here are the written instructions for the Fabulous Folding Photo Holder.

This follows previous posts about our Mac integration into our domain.  If you have not read them: we chose the Dual Directory method (sometimes called the Magic Triangle) to integrate our Macs into our existing Windows network. It takes the Macs a long time to find the network, and they don’t always find the user’s home folder.   Kevin has done some network captures using Wireshark and has learned a few things that we have tried. (He has found this book on Wireshark to be most useful.)

First, we’ve experimented with disabling spanning-tree protocol on our client ports and seen about a 20-second improvement from the Mac startup chime to “Other” showing up in the login options (indicating that the Mac knows there is a network). This had a negligible effect on Windows clients. Note: on Dell Gigabit switches this means enabling Fast Link on a client port.  We learned that from this blog post.
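For what it’s worth, on Cisco switches the equivalent of Dell’s Fast Link is PortFast; a sketch for a single client port (the interface number here is arbitrary):

```
! Cisco IOS example -- GigabitEthernet0/12 is just an example client port.
interface GigabitEthernet0/12
 description Client port - skip the STP listening/learning delay
 spanning-tree portfast
```

PortFast (and Fast Link) should only go on ports facing end devices, never on uplinks to other switches, or you risk bridging loops.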

Secondly, his captures of a Mac booting up and attaching to the network are very interesting. Before the Mac has even sent out a DHCP request, it is doing multicast DNS queries with an APIPA address to find the Mac domain controller (the OpenLDAP server). Even after it has its IP address, it continues to try to use mDNS.

The next moves and nagging questions:

  • Investigate having an intelligent mDNS service that proxies the inside DNS.
  • Investigate whether IPv6 DNS might be necessary, since we see a lot of IPv6 mDNS requests, even from iPhones on the wireless.
  • IP Helper: could IP Helper settings on the switches assist with DHCP and cut out some of the time or mDNS traffic?
  • Is it still just an issue with the .local domain, which OS X might insist belongs only to mDNS?

Kevin will continue to analyze the data and has bounced some of this off of his friends Lester and Eric.

If you have experienced this issue, please comment and share what you have learned!

Paul Rhodes found this article, but we would like to do some more research before going that route.  The problem we see with it is for our laptop users, who have multicast issues when they are offsite.

One of the reasons I really like my job is that there are such innovative approaches to helping End Users get their work done.  I think from the technical side of things we can sometimes struggle to keep solutions simple for the End User.  RemoteApps are a wonderful way to make our database perform better for the User while simplifying management and maintenance for our Database Administrator.

A challenge that was becoming an increasing issue with ACS as a RemoteApp was the confusion it caused staff and volunteers when they moved to a different computer.  The first time you log in to a RemoteApp, you get a screen with a lot of words on it about trusting the security of the server you are trying to connect to.  After that you get another screen that wants your network credentials, which truly confuses the User!  Then, if they don’t tell it to save the network credentials (which we are okay with on site), they have to do it all again the next time they launch the application.  Paul Salvo, one of my volunteers, was happy to take on this issue.  Vista and Windows 7 were pretty easy to resolve, and we could push the changes through Group Policy.  Our XP clients required a registry change, which we pushed using a filter in Group Policy (thanks to the guys in the CITRT IRC chat room who helped me figure out how to push that only to XP machines).  This solution also works if your site has Users working regularly on Remote Desktop servers.

All Windows Clients GPO:

This policy instructs the clients to trust a list of servers for pass-through authentication.  We applied this to the computers in our Group Policy:

  • Computer Configuration
    Administrative Templates
    System
    Credentials Delegation
    Allow Delegating Default Credentials with NTLM-only Server Authentication = Enabled
    Set to TERMSRV/<FQDN of server>
    Set to TERMSRV/<server hostname>
  • We added all of the Remote Desktop Servers we wanted to trust with pass-through.  We created entries with the FQDN and entries with the server’s short name.  We’ve heard that this is recommended.
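One way to sanity-check that the policy reached a client is to refresh policy and look at the registry key the policy writes. A hedged sketch (the exact value names under the key vary by which delegation policies you enabled):

```shell
rem Force a Group Policy refresh, then dump the delegation policy key.
gpupdate /force
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation" /s
```

If the TERMSRV/ entries you configured show up in the output, the GPO applied; if the key is missing, check your policy scoping first.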

Windows XP GPO:

  • You really need SP3 and the latest RDC client on the machine.  We used this KB article for that information.

Here is the Registry we imported into Group Policy:

  • Key path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders
    Name: SecurityProviders
    Value type: REG_SZ
    Value: msapsspc.dll, schannel.dll, digest.dll, msnsspc.dll, credssp.dll

    Key path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
    Name: Security Packages
    Value type: REG_MULTI_SZ
Then we created a WMI filter so that the registry change only applied to XP machines.  It wasn’t necessary on Vista or Windows 7, and I’ve learned to be careful when editing the Registry.

  • Namespace:  root\CIMV2
    Query:  Select * from Win32_OperatingSystem where Version = "5.1.2600"
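To sanity-check the filter, you can run essentially the same query on a client with wmic (XP SP3 reports version 5.1.2600 exactly):

```shell
rem Show the version string the WMI filter compares against.
wmic os get Caption,Version

rem Run roughly the same WQL the filter uses; a row comes back only on XP.
wmic path Win32_OperatingSystem where "Version='5.1.2600'" get Caption,Version
```

Keep in mind an exact-match version filter like this won’t match anything else, so if you ever need to target Vista or 7 you’d write a new filter rather than widen this one.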

What this didn’t fix:

Now when users double-click the ACS icon (the RemoteApp), they see that it is starting a RemoteApp and then get the login screen for ACS.  They are mostly happy with the above changes.

The first time a user launches the RemoteApp, they still get this screen.  We think it has to do with the domain not having a Certificate Authority.  We haven’t decided if it is worth all of the extra work; I imagine we will pursue it in the future, but right now I have my team working on other things.  For now, the User is instructed to check the box and click Connect.