Last October I went to AstriCon in Washington, DC and managed to lose my noise-cancelling headphones somewhere. I ordered a new pair of Panasonic RP-HC700 Noise-Cancelling Headphones and had them shipped to the hotel for the flight home. They worked better than my $50 Philips NC headphones that were 4 or 5 years old, and continued to do so for the first two times I used them. Flying to Hawaii for New Year's I noticed a staticky buzzing sound; finding it annoying, I simply turned off the NC function and assumed the battery was dead. I replaced the battery, and in January, on a short Bombardier Q400 turboprop flight to Montana, I noticed it again and had to turn it off again.
This week I put them on so I wouldn't have to listen to the vacuum cleaner at home, and the static is horrible with the noise-cancelling function turned on. With NC turned off I still hear it in my right ear, although at a reduced level. Crap. But no problem, things break; that's why manufacturers normally provide a one to five year warranty. I went to panasonic.com, filled out the contact form describing what happened, and received this reply 24 hours later:
Thank you for contacting Panasonic.
Based on the information you have provided, we recommend the product be sent to our Customer Service Center in McAllen Texas. In accordance with the warranty, the product will be replaced with a reconditioned model. Please send the product along with a copy of the receipt and a letter to explain the problem to: Panasonic Customer Service Center 4900 George McVay Dr. Suite B Door #12 McAllen, TX 78503
The warranty period for the product is ninety days for labor and parts coverage for service or replacement with a reconditioned model. The warranty period begins at the time of purchase. The original sales receipt or a copy of the sales receipt is required to validate warranty service.
If the unit was purchased more than ninety days, you can still ship the product to the address listed above, however, it will be replaced with a reconditioned model for a fee. To obtain information regarding the cost to replace the product, please call (800) 211-7262.
So I called the phone number, told it I was calling about headphones, gave it the model number, and the IVR then said "You may ship these back to us at (5 second pause) and we will replace them for a fee, but call before you do. Can I help you further?"
When I said yes it went back to the beginning of the IVR. Isn't that helpful: it failed to give me the address, and provided no way to call, considering I was already on the phone and answering yes kicked me back to the main menu. I gave up, said "operator," and pressed 0 on my phone 10 times, and eventually I got put on hold. The hold music was deafening white-noise static (similar to what my headphones produce). After 5 minutes of this I was connected to a surly woman who did not sound like she enjoyed her job. Amazingly, she was dumbfounded that I was told to call them, because according to her they provide no warranty on Panasonic equipment unless it was bought from Panasonic Direct. Yes, you read that right.
1) The email said to call, and I did, and the person at that phone number was dumbfounded.
2) If you go to Best Buy, Amazon, or any other retailer and buy a Panasonic product, Panasonic will not provide warranty support. Your only recourse is to return the product to the retailer. Most retailers have a 30-day return policy, meaning your product has zero support 30 days from the date of purchase.
3) The answer I got via email was not the same answer I got via phone, but neither one was anywhere close to acceptable.
Moral of the story: I got one flight out of my $110 headphones, and Panasonic does not stand behind their product or help you get it replaced when it fails. This is the only Panasonic product I can remember having purchased in the last 5 years, and it will be the absolute last Panasonic product I ever buy if they refuse to stand by the quality of their products.
Wednesday, May 18, 2011
Tuesday, May 17, 2011
Upgrading from Microsoft Dynamics CRM 4.0 to CRM 2011 Installation Failures
Update: I fixed the problem and you can see the short answer version at the bottom of this post
We have an existing Microsoft Dynamics CRM 4.0 implementation and I decided to see if I could upgrade it to 2011 without too much hassle (the answer was no); I ran into several problems that I couldn't find very much information about. Because our CRM 4.0 solution is set up on a Windows Server 2003 machine, I had to build a new server with Windows Server 2008 R2 and SQL Server 2008 R2 to support Microsoft Dynamics CRM 2011. To avoid breaking our production system I took a backup of our existing CRM implementation, copied it to the new server, restored it under the same name, and then did all of my import testing there. Please note that the import makes tons of modifications to the database, rendering the old implementation useless, so it's smart to do all of your testing on a copy of your production database.
Problems:
1) I didn't read the full requirements list, which states SQL Server Express isn't supported, so when I saw it on the ISO I assumed it would work. Think again: CRM requires a full version of SQL Server. Why is the Express version on the ISO if it's not supported? Seriously?
Time Lost: 20 Minutes to install and then uninstall it.
2) After getting over that hurdle I opened the CRM splash.exe launcher. I clicked install on "Install Microsoft Dynamics CRM Server" and it crashed like so:
I did some searching through the ISO, found the Server\amd64\SetupServer.exe file, and ran that. From some Google searches it looks like having IE9 installed causes this crash, with no explanation or solution other than to run the setup files directly. Irritating. I confirmed this theory by uninstalling Internet Explorer 9 from View Installed Updates in the Add/Remove Programs section of the Control Panel, after which the splash screen worked as it should.
Time Lost: 10 minutes to find each application and the SRS Extensions executables to run by hand.
10 more minutes to uninstall IE9 to verify that was the problem for the purposes of this post.
3) In order to upgrade my existing installation without breaking it, I copied the CRM database to my 2011 server and imported that. 10 minutes into the upgrade process it fails with this error:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Data.SqlClient.SqlException: Column names in each table must be unique. Column name 'IsIntegrationUser' in table 'SystemUserBase' is specified more than once.
This is a confusing error, because the column exists in the database only once, so I'm curious what the upgrade process was attempting to do when it hit this problem. I looked at the SystemUserBase table in the CRM 2011 database and it looks fairly similar to the 4.0 version, with a few more columns. Searching around I found very little; there was one post suggesting it may be a case-sensitivity issue, but the column is named exactly as it should be, so that isn't it.
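Before changing anything it's worth checking where the column actually lives. This is just a standard SQL Server catalog query, run against the organization database:

```sql
-- Which tables contain an IsIntegrationUser column?
SELECT TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'IsIntegrationUser';
```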
I decided to rename the column to IsIntegrationUserOld because the column was set to 0 for every user. I ran the import again and this time I got a new error:
System.Exception: Action Microsoft.Crm.Tools.Admin.UpgradeDatabaseAction failed. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Data.SqlClient.SqlException: There is already an object named 'DF_SystemUserBase_IsIntegrationUser' in the database.
Okay, so that didn't work. I deleted my import, restored my database again, and this time simply deleted the IsIntegrationUser column. 10 minutes later it failed again, but with a new error:
Error| Exception occured during Microsoft.Crm.Tools.Admin.OrganizationUpgrader: Action Microsoft.Crm.Tools.Admin.UpgradeDatabaseAction failed. InnerException: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentException: An item with the same key has already been added..
A suggestion on the CRM Forums in this post was that I shouldn't have this field at all, so on my production system I should mark it as a custom field and then remove it in the Customizability section. I used this command to do so:
Update attribute set IsCustomField = 1 where name = 'IsIntegrationUser'
Then I went to this screen and removed the IsIntegrationUser attribute (I took this screenshot after removing it, so you won't see it in my list):
I did this and got a generic error. I checked the Event Viewer for more detail and it didn't give me anything, so I went to find out how to enable logging. It turns out you need to use tracing, so I enabled it by updating my registry entries with help from the KB article at http://support.microsoft.com/kb/907490:
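The registry screenshot didn't survive, so as a rough sketch, these are the tracing values as I recall them from KB 907490 (double-check names and the directory against the article; C:\crmtrace is just an example path that must already exist):

```
Windows Registry Editor Version 5.00

; CRM server tracing settings -- value names per KB 907490, verify there
[HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\MSCRM]
"TraceEnabled"=dword:00000001
"TraceDirectory"="C:\\crmtrace"
"TraceCategories"="*:Verbose"
"TraceRefresh"=dword:00000001
```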
With tracing enabled I performed the delete again, got the error message, and found this error in the log file:
ALTER TABLE SystemUserExtensionBase DROP COLUMN IsIntegrationUser Exception: System.Data.SqlClient.SqlException: ALTER TABLE DROP COLUMN failed because column 'IsIntegrationUser' does not exist in table 'SystemUserExtensionBase'.
So I added this column (IsIntegrationUser, bit) to the SystemUserExtensionBase table, thinking that if it exists the delete might complete cleanly, and... SUCCESS! I got past this error. I then made a new backup of the database, copied it to my CRM 2011 server, imported it into SQL, and attempted to import the 4.0 implementation into 2011.
I ended up getting the same error as before:
System.Data.SqlClient.SqlException: Column names in each table must be unique. Column name 'IsIntegrationUser' in table 'SystemUserBase' is specified more than once.
In theory the IsIntegrationUser attribute should now be deleted, but the column still existed in the SystemUserBase table. I removed the column and re-ran the import. It made it past this step and everything seems to be good.
Time Lost: 2 Days through Trial and Error and Troubleshooting (every attempt took 10 to 20 minutes)
Final Solution:
- On the CRM 4.0 Server
- Update attribute set IsCustomField = 1 where name = 'IsIntegrationUser'
- Modify the SystemUserExtensionBase table and add a column for IsIntegrationUser with type bit.
- Navigate to Settings -> Customization -> User (double click) -> Attributes and delete the IsIntegrationUser value
- Now go to the SystemUserBase table and delete the IsIntegrationUser column if it still exists.
- Backup the CRM Database from SQL Management Studio
- On the CRM 2011 Server
- Restore the 4.0 CRM Database to SQL
- Navigate to Organizations, and import the old CRM Database
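For reference, the database half of the steps above boils down to a few statements. This is a sketch against the CRM 4.0 organization database; the attribute delete itself still happens in the Customization UI between steps 2 and 3:

```sql
-- 1) Mark the attribute as custom so the UI will let you delete it
UPDATE Attribute SET IsCustomField = 1 WHERE Name = 'IsIntegrationUser';

-- 2) Give the delete something to drop from the extension table
ALTER TABLE SystemUserExtensionBase ADD IsIntegrationUser bit NULL;

-- (delete the IsIntegrationUser attribute in Settings -> Customization, then:)

-- 3) Remove the leftover column, if it is still present
ALTER TABLE SystemUserBase DROP COLUMN IsIntegrationUser;
```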
Thursday, February 17, 2011
How to create a reverse HTTP(s) failover web proxy using nginx & heartbeat
Update: There is a Virtual Machine Image available at the bottom of this post
Currently we use a pair of very expensive F5 load balancers to manage our highly available SaaS application, providing SSL offloading and round-robin load balancing with failover in the event one of the SaaS nodes fails. I decided to see if I could replace all of the functionality we use from those F5s with a custom-built (and free) solution. I spoke to my coworker JW, who works on the SaaS app and has recently been working with other vendors such as Barracuda to find a cheaper replacement (our F5s are aging), and got a list of requirements from him so I would know exactly what we needed. I spent some time researching and came across nginx, which seemed like a suitable package as it provides web proxying, SSL offloading, failover, and sticky sessions (the last only in more recent builds). It took me about half an hour to create a proof of concept with plain HTTP traffic and another hour to get SSL working and refine/test the configuration. These are instructions for creating a pair of failover load balancers using heartbeat and nginx on Debian Lenny. If you don't need the heartbeat failover cluster, skip all of the instructions that involve heartbeat. Configuration on other Linux platforms should be similar.
Start with a base install of Debian (two if you want clustering)
Configure the network interface with a static IP address:
/etc/network/interfaces:
allow-hotplug eth0
iface eth0 inet static
address 192.168.11.250
netmask 255.255.255.0
gateway 192.168.11.1
Configure Apt Package Manager
The version of nginx we need is in the testing tree, so we configure apt by adding the testing tree to sources.list, then pin the system to stable while allowing nginx to come from testing.
/etc/apt/sources.list:
deb http://debian.osuosl.org/debian testing main contrib non-free
deb-src http://debian.osuosl.org/debian testing main contrib non-free
/etc/apt/preferences:
Package: *
Pin: release a=stable
Pin-Priority: 900
Package: nginx
Pin: release a=testing
Pin-Priority : 1000
Install Packages
apt-get update
apt-get install heartbeat nginx
Configure Heartbeat
On each server you need to modify ha.cf to look like this. On node one, put node two's IP in OtherNodeStaticIP; on node two, use node one's IP.
/etc/ha.d/ha.cf
udpport 694
bcast eth0
mcast eth0 225.0.0.1 694 1 0
ucast eth0 <OtherNodeStaticIP>
node NODENAME1
node NODENAME2
auto_failback off
logfacility local0
/etc/ha.d/haresources
NODENAME1 CLUSTERIP nginx
cp /etc/init.d/nginx /etc/ha.d/resource.d/nginx
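CLUSTERIP above is a placeholder for your shared virtual IP. With the classic haresources syntax, Heartbeat treats a bare dotted-quad as an IPaddr resource, so a filled-in line might look like this (192.168.11.252 is a made-up example address):

```
# /etc/ha.d/haresources -- bring up the virtual IP, then start nginx
NODENAME1 192.168.11.252 nginx
```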
Certificates
If you need reverse SSL support to provide SSL encryption for a web service that is not SSL, or you would like to offload the SSL process from your webserver to another device, you will need to configure SSL certificates. I created a new folder, /etc/nginx/sslcerts and did all of my work in there.
We use the Entrust certification authority, so we took the certificate and key file for the certificate we wanted and put them in the /etc/nginx/sslcerts directory. Additionally, you need to concatenate the intermediate certificate from your certificate signing authority to the end of your .crt file.
Copy your certificate and key to /etc/nginx/sslcerts/ as CERTIFICATE.crt and CERTIFICATE.key, then append the intermediate certificate:
cat EntrustMiddleCert.crt >> /etc/nginx/sslcerts/CERTIFICATE.crt
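Before pointing nginx at the files, it's worth confirming the key and certificate actually pair up. This sketch compares their RSA moduli (CRT/KEY default to the paths used above; override them to point elsewhere):

```shell
# Sanity check: the certificate and key must share the same RSA modulus.
CRT=${CRT:-/etc/nginx/sslcerts/CERTIFICATE.crt}
KEY=${KEY:-/etc/nginx/sslcerts/CERTIFICATE.key}
openssl x509 -noout -modulus -in "$CRT" | openssl md5
openssl rsa  -noout -modulus -in "$KEY" | openssl md5
# The two digests must be identical; a mismatch means cert and key don't pair.
```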
Modify /etc/ha.d/authkeys on both boxes:
auth 2
2 sha1 SOMETHING_UNIQUE
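Heartbeat refuses to start if authkeys is readable by anyone but root, so lock the permissions down after editing it:

```shell
# heartbeat requires authkeys to be readable by root only (mode 600)
chmod 600 /etc/ha.d/authkeys
```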
Configure nginx
In our configuration we have two clusters, one for the main site and one for reporting, plus a final server that hosts a simple page saying the site is down. The configuration below provides all of this; in the event that both nodes of a cluster are down, the user gets the defined error page instead of a 404 or something similar. It includes both HTTP and HTTPS reverse proxy configurations, although you may not want both. Edit to your liking.
/etc/nginx/sites-enabled/default
upstream SITENAME_cluster {
ip_hash; #Sticky sessions
server 10.32.41.230:20780 max_fails=3 fail_timeout=31s;
server 10.32.41.231:20780 max_fails=3 fail_timeout=31s;
}
upstream inetsoft {
ip_hash; #Sticky sessions
server 10.32.41.113:8080 max_fails=3 fail_timeout=31s;
server 10.32.41.114:8080 max_fails=3 fail_timeout=31s;
}
#the server that serves our notice that the application is unavailable
upstream errorpage {
server status.localdomain.com max_fails=3 fail_timeout=31s;
}
#If you want to redirect all HTTP traffic to the HTTPS site, use this instead of following the port 80 declaration below
#server {
# listen 80;
# server_name SITENAME;
# rewrite ^(.*) https://$host$1 permanent;
#}
server {
listen 80;
server_name SITENAME;
location /Reporting {
proxy_pass http://inetsoft$request_uri;
}
location / {
proxy_pass http://SITENAME_cluster$request_uri;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
### Set headers ####
proxy_set_header X-Real-IP $remote_addr;
}
error_page 502 /error.html;
location /error.html
{
proxy_pass http://errorpage/error.php?SITENAME;
proxy_set_header X-Real-IP $remote_addr;
}
}
server {
listen 443;
server_name SITENAME;
ssl on;
ssl_certificate /etc/nginx/sslcerts/CERTIFICATE.crt;
ssl_certificate_key /etc/nginx/sslcerts/CERTIFICATE.key;
ssl_session_timeout 90m;
ssl_protocols SSLv2 SSLv3 TLSv1;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
ssl_prefer_server_ciphers on;
location /Reporting {
proxy_pass http://inetsoft$request_uri;
}
location / {
proxy_pass http://SITENAME_cluster$request_uri;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
### Set headers ####
proxy_set_header X-Real-IP $remote_addr;
}
error_page 502 /error.html;
location /error.html
{
proxy_pass http://errorpage/error.php?SITENAME;
proxy_set_header X-Real-IP $remote_addr;
}
}
You will need to keep your configurations the same on both servers. A tool like rsync can automate this, but for the purposes of this tutorial I'll leave it up to you to decide how to keep your nodes in sync.
Test your nginx configuration for errors (nginx -t will parse the configuration and report problems) and start it with /etc/init.d/nginx start. If you get any error messages be sure to resolve them; the http://wiki.nginx.org site was very useful for me. If everything works properly you are good. If you are using clustering, now start and test your Linux-HA cluster.
/etc/init.d/heartbeat start
Congratulations, you have a reverse HTTP(s) web proxy with failover clustering, and it didn't cost you any body parts.
Wednesday, February 9, 2011
Asterisk with Exchange 2010 SP1 Unified Messaging
Today I decided I wanted to play with using Microsoft Exchange Server Unified Messaging for voicemail in our Digium Asterisk 1.6.2 deployment. Already having an Exchange 2010 deployment, all I needed to do was install the UM component in our Exchange infrastructure and configure everything.
Unified Messaging provides voicemail transcription and automatically delivers the voicemail to the user's mailbox with as much information about the caller as possible. If the caller is another Unified Messaging-enabled user, it will include all of their contact info based on their caller ID. For example, when I call somebody from extension 1593 in Asterisk and leave another UM user a voicemail, and I have 1593 configured in UM, they will get my cell phone number, email address, and title along with the transcription of the voicemail and an MP3 of the recording. Slick.
One thing to note: transcription uses a fair amount of CPU power. It took about 30 seconds to transcribe a 30-second voicemail, fully utilizing one core of a Xeon E5430 processor and 250 megabytes of memory. If a lot of voicemail messages are being recorded you may need to watch your system utilization closely.
Install Unified Messaging Component
- UM Requires the desktop experience pack in Windows Server 2008 R2, so I installed that and rebooted
- Next, you need to have the Microsoft Server Speech Platform Runtime (x64) installed. If you run the installer it will tell you to install it and provides a link to download it. You must use that link; if you just search for and download the platform runtime you will not get the right version and the check will still fail.
- Finally, if the Windows Firewall service is disabled, the install will fail with the following message, so make sure the service is set to automatic or manual start. Thanks to this blog post for helping me figure this one out:
- The following error was generated when "$error.Clear(); install-UMService " was run: "There are no more endpoints available from the endpoint mapper. (Exception from HRESULT: 0x800706D9)".
- There are no more endpoints available from the endpoint mapper. (Exception from HRESULT: 0x800706D9)
- Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.267.0&e=ms.exch.err.Ex88D115&l=0&cl=cp
- Once you have the prerequisites installed you can install the UM component using your installation media.
- Once the installation is complete you need to configure UM.
Configure Unified Messaging
- First you need to create a Dial Plan
- We use 4 digit extensions and we are in the United States, so I configured the settings accordingly.
- Now create a UM IP Gateway to point at your Asterisk server, entering the IP Address of your Asterisk box and selecting the Dial Plan you just created.
- Next I configured the UM Mailbox Policies to use 4 digit pin length instead of the default 6.
- Finally I configured my Auto Attendant with an extension to provide directory lookups, etc.
Configure Asterisk
- Next configure your Asterisk server. You need to add a SIP Peer to sip.conf:
- [exchangeum]
- host=192.168.11.31 ; IP of your Exchange server with UM installed
- type=friend
- insecure=very
- transport=tcp
- port=5065
- context=from-ocs ; context for calls coming from Exchange; I pointed it at the same context we use for our OCS/Lync deployment
- Configure your dial plan to use Unified Messaging for voicemail instead of Asterisk's Voicemail() App.
- This dialplan will call my phone whenever somebody dials 1593, and if I don't answer on my desk phone, or on Lync (OCS_TRUNK_R2) it will send the call to my unified messaging mailbox.
- exten => 1593,1,Dial(SIP/1593&SIP/OCS_TRUNK_R2/+2593,25)
- exten => 1593,n,SipAddHeader(Diversion:<tel:1593>)
- exten => 1593,n,Dial(SIP/1593@exchangeum)
- exten => 1593,n,Hangup()
- You can manually configure one user at a time in the Exchange Console, or use a PowerShell script to do it in bulk. I found a useful tool available here that lets you bulk-add users to Unified Messaging.
- I modified his tool a little to simplify deployment, making it take username,extension pairs in a CSV file.
- You can see the sample CSV here, and download the modified powershell script here
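For illustration, the CSV is just one username,extension pair per line (these two users are hypothetical):

```
jsmith,1593
mjones,1594
```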
Tuesday, January 18, 2011
Microsoft Lync Server 2010 Integration with Digium Asterisk
We have been using Microsoft Office Communications Server for a long time now and have recently begun testing Lync. It took mere minutes to get Lync working with Asterisk 1.6.2. The same steps work for Asterisk 1.8 as well, and are for the most part the same in trixbox, FreePBX, Elastix, etc.
Asterisk Configuration
First, configure your Asterisk installation: you have to enable TCP SIP and create a peer. I added the following to the sip.conf general section:
[general]
tcpbindaddr=0.0.0.0
tcpenable=yes
and then I added my peer for Lync
[LYNC_TRUNK]
type=peer
host=192.168.11.4
qualify=no
transport=tcp,udp
canreinvite=no
port=5068
disallow=all
allow=ulaw
context=from-ocs
I used the context from-ocs, so I set up a basic context in extensions.conf, stripping the + that comes through and sending the call to the normal internal call context for normal call handling:
[from-ocs]
exten => _+1XXX,1,Goto(internal-call,${EXTEN:1},1)
exten => _+1XXXXXXXXXX,1,Dial(SIP/${EXTEN}@OUTBOUND_PROVIDER,30)
You will need to make a test extension in your normal dial context to test calling in to Lync, something like this should work
exten => 1000,1,Dial(SIP/+2593@LYNC_TRUNK,30)
After restarting Asterisk that side of things should be configured; now just make sure you have Lync configured.
Lync Configuration (Standard Edition)
- Open the Lync 2010 Topology Builder
- Edit the properties of your standard edition pool
- Install the mediation server, I used the Collocated option because load is low enough it doesn't need a dedicated server.
- Under mediation server take note of the TCP Listening port, as that is the port you need to specify in sip.conf of Asterisk. It defaults to 5068 so that's what I used.
- At the bottom find the section "The following gateways are associated with this mediation server."
- Click New, and enter the IP address of your Asterisk server, and the port you use for Asterisk TCP SIP (5060 by default).
- Note that mine has a red X saying I already have this address configured; that's because I was redoing the steps for this tutorial. Also note I have two gateways added, as my Asterisk server has multiple IP addresses and you must enter all of them in the gateway list.
- Once this is done make sure to publish your topology and re-run setup if you did not previously have the mediation service installed. I restarted my entire Lync server at this point but if you'd prefer you should be able to restart the Mediation service to apply the new settings.
- Open your Lync control panel, go to users, edit your test user and enable Telephony for Enterprise Voice.
- We use 2XXX-range extensions in Lync and 1XXX-range extensions in Asterisk, so you will see tel:+2593 and ext=1593. The ext=1593 is for PSTN conferencing support, so Lync can see me call from 1593, know I am Andrew, and automatically authenticate me into my own conferences.
- Configure Voice Routing -> Dial Plan. These are the rules I wrote in OCS 2007 R2 and imported using Import-CsLegacyConfiguration. You will need to tweak them for your requirements.
- At this point you should be able to make/receive calls to and from Lync/Asterisk.
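For reference, the Asterisk end of the trunk might look roughly like this in sip.conf. This is only a sketch: the host and context values are placeholders to adapt to your environment, while LYNC_TRUNK matches the peer name used in the dialplan above and 5068 is the Mediation Server's default TCP listening port noted earlier.

```
; Sketch of a sip.conf peer for the Lync Mediation Server.
[LYNC_TRUNK]
type=peer
host=lync.example.com      ; placeholder: your Mediation Server's address
port=5068                  ; the Mediation Server's TCP listening port
transport=tcp
context=from-lync          ; placeholder: context that handles calls from Lync
```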
Exchange 2010 Backup Failure with Data Protection Manager 2010
About a month ago we created a new mailbox store on our Exchange 2010 SP1 server, and last night I started getting alarms that the server was running low on disk space on the drive that holds the transaction logs. This happened because the new store was not automatically picked up by DPM 2010, and the transaction logs had built up over time, eating away at the available space. My fault for forgetting to verify that DPM was backing up the store.
I went to add it in DPM and got the following error upon expanding the exchange server to add the new database store:
DPM could not enumerate application component Microsoft Exchange Server\Microsoft Information Store\csgtacex1\17a465cc-90ca-4abd-927f-9aed49f33b5e on protected computer csgtacex1.domain.com. (ID: 964)
Please make sure that writer is in good state.
I discovered a VSS error on my Exchange server, which I fixed by cleaning up a ProfileList entry that had .bak appended to the end of it. This is something I've seen break on Windows Vista machines recently, so I found it interesting that it was causing problems on an Exchange server.
To clean up the entries, open regedit, navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\, and remove any keys that have .bak appended to them. For good measure, I made a backup before deleting the key. After rebooting, the VSS errors were gone, but DPM still didn't work.
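If you prefer not to delete the key by hand, the same cleanup can be expressed as a .reg file: a leading minus on a key name deletes that key on import. The SID below is a placeholder for the actual key name ending in .bak on your server.

```
Windows Registry Editor Version 5.00

; The leading minus deletes the key when this file is imported.
; Replace the placeholder SID with the real .bak key name from your server.
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-500.bak]
```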
I googled the DPM-specific error and found this post on TechNet about the same problem. Everything on my system looked fine in vssadmin list writers, but further down the thread Frans posts about missing registry keys for PowerShell. That got me curious, because I had noticed that the Exchange Management Shell had disappeared since upgrading to Exchange 2010 Service Pack 1, and I immediately became suspicious. His solution was to export the missing keys from another server that had them and import them on the broken server. I had another server with the keys in place, made an export, imported it, and DPM immediately worked. Below is the .reg file I exported; save it to a .reg file, import it, and you should be good to go.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns]
@=""
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.Exchange.Management.PowerShell.E2010]
"CustomPSSnapInType"="Microsoft.Exchange.Management.PowerShell.AdminPSSnapIn"
"ApplicationBase"="C:\\Program Files\\Microsoft\\Exchange Server\\V14\\bin"
"AssemblyName"="Microsoft.Exchange.PowerShell.Configuration, Version=14.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
"Description"="Admin Tasks for the Exchange Server"
"ModuleName"="C:\\Program Files\\Microsoft\\Exchange Server\\V14\\bin\\Microsoft.Exchange.PowerShell.Configuration.dll"
"PowerShellVersion"="1.0"
"Vendor"="Microsoft Corporation"
"Version"="14.0.0.0"
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.Exchange.Management.PowerShell.Setup]
"CustomPSSnapInType"="Microsoft.Exchange.Management.PowerShell.SetupPSSnapIn"
"ApplicationBase"="C:\\Program Files\\Microsoft\\Exchange Server\\V14\\bin"
"AssemblyName"="Microsoft.Exchange.PowerShell.Configuration, Version=14.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
"Description"="Setup Tasks for the Exchange Server"
"ModuleName"="C:\\Program Files\\Microsoft\\Exchange Server\\V14\\bin\\Microsoft.Exchange.PowerShell.configuration.dll"
"PowerShellVersion"="1.0"
"Vendor"="Microsoft"
"Version"="14.0.0.0"
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.Exchange.Management.Powershell.Support]
"CustomPSSnapInType"="Microsoft.Exchange.Management.Powershell.Support.SupportPSSnapIn"
"ApplicationBase"="C:\\Program Files\\Microsoft\\Exchange Server\\V14\\bin"
"AssemblyName"="Microsoft.Exchange.Management.Powershell.Support, Version=14.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
"Description"="Support Tasks for the Exchange Server"
"ModuleName"="C:\\Program Files\\Microsoft\\Exchange Server\\V14\\bin\\Microsoft.Exchange.Management.Powershell.Support.dll"
"PowerShellVersion"="1.0"
"Vendor"="Microsoft Corporation"
"Version"="14.0.0.0"
Wednesday, December 22, 2010
Exchange 2010 SP1 Removing Automatically Discovered Mailbox
Not too long ago we upgraded our Microsoft Exchange 2010 server to Service Pack 1. A few days later I noticed that a mailbox I had granted myself full access permission to had automatically appeared in Outlook on all of my computers, without my doing anything. A quick search turned up this article saying it's a new feature, which is pretty awesome as far as I'm concerned. I then noticed it was impossible to remove the mailbox from Outlook: even after removing myself from the permissions list in the Exchange Management Console, it continued to show up. The article shows where the delegate gets added, so I went to the record, and lo and behold, the link to myself was still there. I had to manually delete the delegate in ADSI Edit and close and re-open Outlook before the mailbox disappeared. Hopefully Microsoft will fix this bug in the near future.
How to remove the link:
- Open ADSI Edit.
- Navigate to the user object under Default Naming Context.
- Open the properties of the user and scroll down to MsExchDelegateListLink.
- Remove the value for yourself. In this case I removed Andrew.
- Wait for any replication to happen in your AD infrastructure.
- Close and re-open Outlook. Within a few seconds the mailbox should disappear.
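If you have several of these to clean up, the same change can be expressed as an LDIF fragment and imported with ldifde instead of clicking through ADSI Edit for each one. This is a sketch: the DNs below are placeholders for your mailbox and user objects.

```
dn: CN=Shared Mailbox,OU=Users,DC=example,DC=com
changetype: modify
delete: msExchDelegateListLink
msExchDelegateListLink: CN=Andrew,OU=Users,DC=example,DC=com
-
```

Import it with ldifde -i -f remove-delegate.ldf (the filename is arbitrary); as with ADSI Edit, wait for AD replication and restart Outlook afterwards.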
Notes:
* Mailboxes you had granted yourself access to before you installed SP1 will not be auto-discovered
* You will have to manually remove this entry for each mailbox you want to remove from Outlook. Annoying.