Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports

Archive for the ‘Deployment’ Category

OS X Yosemite 10.10 + cURL 7.37.1 – CA Certificate Issue & curl_ssl_verifypeer Flag

Sunday, June 28th, 2015

If you are using Opauth-Twitter and suddenly find that Twitter OAuth is failing on OS X Yosemite, it could be because of the CA certificate issue.

In OS X Yosemite 10.10, they switched cURL’s version from 7.30.0 to 7.37.1 [curl 7.37.1 (x86_64-apple-darwin14.0) libcurl/7.37.1 SecureTransport zlib/1.2.5] and since then cURL always tries to verify the SSL certificate of the remote server.

In the previous versions, you could set curl_ssl_verifypeer to false and it would skip the verification. However from 7.37, if you set curl_ssl_verifypeer to false, it complains “SSL: CA certificate set, but certificate verification is disabled”.

Prior to version 0.60, tmhOAuth did not come bundled with the CA certificate and we used to get the following error:

SSL: can’t load CA certificate file <path>/vendor/opauth/Twitter/Vendor/tmhOAuth/cacert.pem

You can get the latest cacert.pem from http://curl.haxx.se/ca/cacert.pem and save it under /Vendor/tmhOAuth/cacert.pem. (The latest version of tmhOAuth already has this in its repo.)

Then we need to set curl_ssl_verifypeer to true in the $defaults (optional parameters) in TwitterStrategy.php on line 48.
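Under the hood, that flag just maps onto standard PHP cURL options. For reference, here is a minimal sketch (not taken from Opauth/tmhOAuth; the URL and cacert.pem path are placeholders) of what verifying the peer against the bundled CA file looks like in raw cURL:

<?php
// Verify the remote server's certificate against the bundled CA file
$ch = curl_init('https://api.twitter.com/oauth/request_token');            // placeholder URL
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);                            // do not skip verification
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);                               // also check the hostname
curl_setopt($ch, CURLOPT_CAINFO, __DIR__ . '/Vendor/tmhOAuth/cacert.pem'); // assumed path to the CA bundle
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
if ($response === false) {
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);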

P.S: Turning off curl_ssl_verifypeer is actually a bad security move. It can make your server vulnerable to a man-in-the-middle attack.

Fixing Perl Warning: Setting locale failed on Mac OS X Mavericks

Sunday, January 12th, 2014

I use SSH to connect to my servers for executing various deployment and monitoring scripts. Of late, whenever I ran my scripts, I kept getting this annoying perl warning:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

At first, I thought this must be due to some changes on my server. So I tried to set the LANG environment variable in bash on the server. No luck!

Later I realised it had to do with my recent upgrade to Mac OS X Mavericks. Turns out that if you are using SSH, there are 2 variables which need to be set on your local machine, and these get passed down to your server when you connect via SSH.

After adding the following lines to ~/.bash_profile on my local machine, the warning went away:

export LC_CTYPE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
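For context, the reason the local settings leak onto the server is that the stock SSH client configuration on OS X forwards your locale variables; /etc/ssh/ssh_config ships with something along these lines (you shouldn't need to touch it):

Host *
    SendEnv LANG LC_*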

Stripping out .html from your URLs

Wednesday, December 11th, 2013

Just learned this little trick in Apache’s .htaccess file to strip the trailing .html from my URLs:

RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.html -f
RewriteRule ^(.*)$ $1.html

After adding the above lines to the .htaccess file, when I request http://nareshjain.com/about it automatically serves the http://nareshjain.com/about.html page. It also handles named anchors very well. For example, http://nareshjain.com/services/clients#testimonials works perfectly fine as well.
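If you also want existing links to the .html version to redirect to the clean URL (so there is only one canonical address), something along these lines should work in the same .htaccess file; this is a sketch I haven't exercised on every setup:

# Externally redirect /page.html to /page (matching THE_REQUEST avoids a redirect loop)
RewriteCond %{THE_REQUEST} \s/(.+)\.html[\s?]
RewriteRule ^ /%1 [R=301,L]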

Rationale behind this:

  • My URLs are shorter
  • Gives me the flexibility to use some web-framework to serve my pages in the future.

curl: (35) Unknown SSL protocol error in connection

Monday, November 25th, 2013

Recently we started getting the following error on the Agile India Registration site:

error number: 35 
error message: Unknown SSL protocol error in connection to our_payment_gateway:443

This error occurs when we try to connect to our Payment Gateway using cURL on the server side (PHP).

Looking at the error message, it occurred to me that maybe we were not setting the correct SSL protocol version, one supported by our PG server.

Using SSL Labs’ Analyser, I figured out that our PG server only supports SSL Version 3 and TLS Version 1.

Typically, if we don’t specify the SSL version, cURL figures out the supported SSL version and uses that. However, to force cURL to use SSL Version 3, I added the following:

curl_setopt($ch, CURLOPT_SSLVERSION, 3);

As expected, it did not make any difference.

The next thing that occurred to me was that maybe the server was picking up the wrong SSL certificate and that might be causing the problem. So I got the SSL certificates from my payment gateway and started passing the path to the certificates:

curl_setopt($ch, CURLOPT_CAPATH, PATH_TO_CERT_DIR);

Suddenly, it started working; however not always. Only about 50% of the time.

Maybe there was some timeout issue, so I added another cURL option:

curl_setopt($ch, CURLOPT_TIMEOUT, 0); //Wait forever

And now it worked every time. However, I noticed that it was very slow. Something was not right.

Then I started using the curl command line to test things. When I issued the following command:

curl -v https://my.pg.server
* About to connect() to my.pg.server port 443 (#0)
*   Trying 2001:e48:44:4::d0... connected
* Connected to my.pg.server (2001:e48:44:4::d0) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to my.pg.server:443
* Closing connection #0
curl: (35) Unknown SSL protocol error in connection to my.pg.server:443

I noticed that it was connecting on an IPv6 address. I was not sure if our PG server supported IPv6.

Looking at cURL’s man page, I saw an option to resolve the domain name to an IPv4 address. When I tried:

curl -v -4 https://my.pg.server

it worked!

* About to connect() to my.pg.server port 443 (#0)
*   Trying 221.134.101.175... connected
* Connected to my.pg.server (221.134.101.175) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using RC4-MD5
* Server certificate:
* 	 subject: C=IN; ST=Tamilnadu; L=Chennai; O=Name Private Limited; 
OU=Name Private Limited; OU=Terms of use at www.verisign.com/rpa (c)05; CN=my.pg.server
* 	 start date: 2013-08-14 00:00:00 GMT
* 	 expire date: 2015-10-13 23:59:59 GMT
* 	 subjectAltName: my.pg.server matched
* 	 issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; 
OU=Terms of use at https://www.verisign.com/rpa (c)10; 
CN=VeriSign Class 3 International Server CA - G3
* 	 SSL certificate verify ok.
> GET / HTTP/1.1
...

Long story short, it turns out that passing the -4 (or --ipv4) cURL option forces IPv4 usage, and this solved the problem.

So I removed everything else and just added the following option and things are back to normal:

curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
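For completeness, here is roughly what the final server-side call looks like with just that option set (a sketch; the gateway URL and payment fields are placeholders):

<?php
$params = array('order_id' => '123');                            // placeholder payment fields
$ch = curl_init('https://my.pg.server/payment/endpoint');        // placeholder gateway URL
curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);          // force IPv4 name resolution
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
$response = curl_exec($ch);
if ($response === false) {
    error_log('cURL error ' . curl_errno($ch) . ': ' . curl_error($ch));
}
curl_close($ch);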

Setting up Virtual Hosts on Mac OS X

Saturday, March 23rd, 2013

If you are building a web-app which needs to use OAuth for user authentication across Facebook, Google, Twitter and other social media, then testing the app locally, on your development machine, can be a real challenge.

On your local machine, the app URL might look like http://localhost/my_app/login.xxx while in the production environment the URL would be http://my_app.com/login.xxx

Now, when you try to test the OAuth integration using Facebook (or any other resource server), it will not work locally, because when you create the Facebook app, you need to give the URL where the code will be located, and this URL is different in the local and production environments.

So how do you resolve this issue?

One way to resolve this issue is to set up a Virtual Host on your machine, so that your local environment has the same URL as the production code.

To achieve this, follow these 4 simple steps:

1. Map your domain name to your local IP address
Add the following line to /etc/hosts file
127.0.0.1 my_app.com

Now when you request http://my_app.com in your browser, it will direct the request to your local machine.

2. Activate virtual hosts in apache

Uncomment the following line (remove the #) in /private/etc/apache2/httpd.conf

#Include /private/etc/apache2/extra/httpd-vhosts.conf

3. Add the virtual host in apache

Add the following VHost entry to the /private/etc/apache2/extra/httpd-vhosts.conf file

<VirtualHost *:80>
    DocumentRoot "/Users/username/Sites/my_app"
    ServerName my_app.com
</VirtualHost>

4. Restart Apache
In System Preferences > “Sharing”, uncheck the “Web Sharing” box (Apache will stop), then check it again (Apache will start).
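Alternatively, assuming you are using the Apache that ships with OS X, you can do the same from the terminal:

sudo apachectl configtest   # sanity-check the new VHost entry
sudo apachectl restart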

Now, http://my_app.com/login.xxx will be served locally.

Inverting the Testing Pyramid

Tuesday, March 19th, 2013

As more and more companies move to the Cloud, they want their latest, greatest software features to be available to their users as quickly as they are built. However, there are several issues blocking them from moving ahead.

One key issue is the massive amount of time it takes for someone to certify that the new feature is indeed working as expected and to assure that the rest of the features will continue to work. In spite of this long waiting cycle, we still cannot assure that our software will not have any issues. In fact, many times our assumptions about the user’s needs or behavior might themselves be wrong. But this long testing cycle only helps us validate that our assumptions work as assumed.

How can we break out of this rut & get thin slices of our features in front of our users to validate our assumptions early?

Most software organizations today suffer from what I call the “Inverted Testing Pyramid” problem. They spend maximum time and effort manually checking software. Some invest in automation, but mostly in building slow, complex, fragile end-to-end GUI tests. Very little effort is spent on building a solid foundation of unit & acceptance tests.

This over-investment in end-to-end tests is a slippery slope. Once you start on this path, you end up investing even more time & effort on testing which gives you diminishing returns.

In this session Naresh Jain will explain the key misconceptions that have led to the inverted testing pyramid approach being massively adopted, the main drawbacks of this approach, and how to turn your organization around to get the right testing pyramid.

How to upgrade CMS Made Simple from 1.9.x.x to 1.10.x

Monday, July 30th, 2012

Recently I had the “pleasure” of upgrading from CMSMS 1.9.3 to 1.10.3.

  • Downloaded the cmsmadesimple-1.10.3-full.tar.gz
  • Unzipped it overwriting some of the existing files from the older version (1.9.3) [tar -xvf cmsmadesimple-1.10.3-full.tar.gz -C my_existing_site_installation_folder]
  • Ran the upgrade script by opening http://my-site.com/install/upgrade.php

I kept getting stuck at step 3; it was complaining:

Fatal error: Call to undefined method cms_config :: save () in /install/lib/classes/CMSUpgradePage3.class.php on line 30

Digging around a little bit, I realized cms_config is no longer available.

Then I tried downloading cmsmadesimple-1.9.4.3-full.tar.gz.

Luckily, this time I was able to get past step 3 without any problem.

So now I was on version 1.9.4.3, but I wanted to get to 1.10.3. So

  • As per their advice, upgraded all my modules to the latest version
  • Downloaded cmsmadesimple-1.10.3-full.tar.gz,
  • Copied its contents
  • Tried to run the upgrade script.

Everything went fine; it even updated my database schema to version 35 successfully. But then when I hit continue on step 6, it was stuck there forever. Eventually it came back with an Internal Error 500. Looking at the log file, all I could see was:

“2012/07/28 06:28:35 [error] 23816#0: *3319000 upstream timed out (110: Connection timed out) while reading response header from upstream”

Turns out that in 1.10, the CMSMS dev team broke backward compatibility in a whole bunch of places. In step 6 of the upgrade, it tries to upgrade and install the installed modules, but during this process it just conks out.

Then I tried uninstalling all my modules and running the upgrade script. Abra-kadabra, the upgrade went just fine.

  • Then I had to go in and install those modules again.
  • Also had to update most of the modules to the latest version which is compatible with 1.10.
  • And restore the data used by the modules.

Had I only known all of this, it would have saved me a few hours of my precious life.

P.S: Just when I finished all of this, I saw that the CMSMS dev team had released the latest stable version, 1.11.

Various Prefixes for Nginx’s Location Directive

Thursday, November 3rd, 2011

Often we need to create short, more expressive URLs. If you are using Nginx as a reverse proxy, one easy way to create short URLs is to define different locations under the respective server directive and then do a permanent rewrite to the actual URL in the Nginx conf file as follows:

http { 
    ....
    server {
        listen          80;
        server_name     www.agilefaqs.com agilefaqs.com;
        server_name_in_redirect on;
        port_in_redirect        on; 
 
        location ^~ /training {
            rewrite ^ http://agilefaqs.com/a/long/url/$uri permanent;  
        }
 
        location ^~ /coaching {
            rewrite ^ http://agilecoach.in$uri permanent;  
        }
 
        location = /blog {
            rewrite ^ http://blogs.agilefaqs.com/show?action=posts permanent;  
        }
 
        location / {
            root   /path/to/static/web/pages;
            index   index.html; 
        }
 
        location ~* ^.+\.(gif|jpg|jpeg|png|css|js)$ {
            add_header Cache-Control public;
            expires max;
            root   /path/to/static/content;
        }
    } 
}

I’ve been using this feature of Nginx for over 2 years, but never actually fully understood the different prefixes for the location directive.

If you check Nginx’s documentation for the syntax of the location directive, you’ll see:

location [=|~|~*|^~|@] /uri/ { ... }

The URI can be a literal string or a regular expression (regexp).

For regexps, there are two prefixes:

  • “~” for case sensitive matching
  • “~*” for case insensitive matching

If we have a list of locations using regexps, Nginx checks each location in the order it’s defined in the configuration file. The first regexp to match the requested URL stops the search. If no regexp matches are found, it uses the longest matching literal string.

For example, if we have the following locations:

location ~* /.*php$ {
   rewrite ^ http://content.agilefaqs.com$uri permanent; 
}
 
location ~ /.*blogs.* {
    rewrite ^ http://blogs.agilefaqs.com$uri permanent;    
}  
 
location /blogsin {
    rewrite ^ http://agilecoach.in/blog$uri permanent;    
} 
 
location /blogsinphp {
    root   /path/to/static/web/pages;
    index   index.html; 
}

If the requested URL is http://agilefaqs.com/blogs/index.php, Nginx will permanently redirect the request to http://content.agilefaqs.com/blogs/index.php. Even though both regexps (/.*php$ and /.*blogs.*) match the requested URL, the first satisfying regexp (/.*php$) is picked and the search is terminated.

However let’s say the requested URL was http://agilefaqs.com/blogsinphp, Nginx will first consider /blogsin location and then /blogsinphp location. If there were more literal string locations, it would consider them as well. In this case, regexp locations would be skipped since /blogsinphp is the longest matching literal string.

If you want to slightly speed up this process, you should use the “=” prefix, i.e.

location = /blogsinphp {
    root   /path/to/static/web/pages;
    index   index.html; 
}

and move this location right to the top of the other locations. By doing so, Nginx will look at this location first; if it’s an exact literal string match, it stops right there without looking at any other location directives.

However, note that if http://agilefaqs.com/my/blogsinphp is requested, none of the literal strings will match and hence the first matching regexp (/.*php$) is picked instead of a string literal.

And if http://agilefaqs.com/blogsinphp/my is requested, the literal string /blogsinphp does match as a prefix, but since it doesn’t use the “^~” prefix, the first matching regexp (/.*blogs.*) takes precedence and is selected.

What if you don’t know the exact string literal, but you want to avoid checking all the regexps?

We can achieve this by using the “^~” prefix as follows:

location = /blogsin {
    rewrite ^ http://agilecoach.in/blog$uri permanent;    
}
 
location ^~ /blogsinphp {
    root   /path/to/static/web/pages;
    index   index.html; 
}
 
location ~* /.*php$ {
   rewrite ^ http://content.agilefaqs.com$uri permanent; 
}
 
location ~ /.*blogs.* {
    rewrite ^ http://blogs.agilefaqs.com$uri permanent;    
}

Now when we request http://agilefaqs.com/blogsinphp/my, Nginx checks the first location (= /blogsin): /blogsinphp/my is not an exact match. It then looks at (^~ /blogsinphp); it’s not an exact match either, but since we’ve used the ^~ prefix and it is the longest matching prefix, this location is selected and all the remaining regexp locations are discarded.

However if http://agilefaqs.com/blogsin is requested, Nginx will permanently redirect the request to http://agilecoach.in/blog/blogsin even without considering any other locations.

To summarize:

  1. Search stops if location with “=” prefix has an exact matching literal string.
  2. All remaining literal string locations are matched. If the location uses “^~” prefix, then regexp locations are not searched. The longest matching location with “^~” prefix is used.
  3. Regexp locations are matched in the order they are defined in the configuration file. Search stops on first matching regexp.
  4. If none of the regexp matches, the longest matching literal string location is used.

Even though the order of the literal string locations doesn’t matter, it’s generally good practice to declare the locations in the following order:

  1. start with all the “=” prefix,
  2. followed by “^~” prefix,
  3. then all the literal string locations
  4. finally, all the regexp locations (since their order matters, place the most likely ones first)

BTW, adding a break directive inside any of these location directives has no effect.

Continuous Deployment Demystified – Agile India 2012 Proposal

Tuesday, November 1st, 2011

“Release Early, Release Often” is a proven mantra, but what happens when you push this practice to its limits? i.e. deploying the latest code changes to the production servers every time a developer checks in code?

At Industrial Logic, developers are deploying code dozens of times a day, rapidly responding to their customers and reducing their “code inventory”.

This talk will demonstrate our approach, deployment architecture, tools and culture needed for CD and how at Industrial Logic, we gradually got there.

Process/Mechanics

This will be a 60-minute interactive talk with a demo. It also has a small group activity as an icebreaker.

Key takeaway: When we started about 2 years ago, achieving CD felt like a huge step. Almost all or nothing. Over the next 6 months we were able to break down the problem and achieve CD in baby steps. I think the approach we took to CD is a key takeaway from this session.

Talk Outline

  1. Context Setting: Need for Continuous Integration (3 mins)
  2. Next steps to CI (2 mins)
  3. Intro to Continuous Deployment (5 mins)
  4. Demo of CD at Freeset (for Content Delivery on Web) (10 mins) – a quick, live walk thru of how the deployment and servers are set up
  5. Benefits of CD (5 mins)
  6. Demo of CD for Industrial Logic’s eLearning (15 mins) – a detailed walk thru of our evolution and live demo of the steps that take place during our CD process
  7. Zero Downtime deployment (10 mins)
  8. CD’s Impact on Team Culture (5 mins)
  9. Q&A (5 mins)

Target Audience

  • CTO
  • Architect
  • Tech Lead
  • Developers
  • Operations

Context

Industrial Logic’s eLearning context? Number of changes, developers, customers, etc.?

Industrial Logic’s eLearning has rich multi-media interactive content delivered over the web. Our eLearning modules (called Albums) have pictures & text, videos, quizzes, programming exercises (labs) in 5 different programming languages, a packing system to validate & produce the labs, plugins for different IDEs on different platforms to record programming sessions, an analysis engine to score students’ lab work in different languages, a commenting system, a reporting system to generate different kinds of student reports, etc.

We have 2 kinds of changes: eLearning platform changes (which require updating code or configuration) and content changes (either code or any other multi-media changes). These are managed by 5 distributed contributors.

On average we’ve seen about 12 check-ins per day.

Our customers are developers, managers and L&D teams from companies like Google, GE Energy, HP, EMC, Philips, and many other Fortune 100 companies. Our customers have very high expectations from us. We have to demonstrate what we preach.

Learning outcomes

  • General Architectural considerations for CD
  • Tools and Cultural change required to embrace CD
  • How to achieve Zero-downtime deploys (including databases)
  • How to slice work (stories) such that something is deployable and usable very early on
  • How to build different visibility levels such that new/experimental features are only visible to a subset of users
  • What Delivery tests do
  • You should walk away with some good ideas of how your company can practice CD

Slides from Previous Talks

Locked Yourself Out? Rescue your IP from CSF’s Temporary Blacklist

Sunday, October 9th, 2011

We have a few Red Hat Enterprise Linux servers, all running ConfigServer Security & Firewall (CSF), which is a Stateful Packet Inspection (SPI) firewall, login/intrusion detection and security application for Linux servers. Amongst various other things, it looks for port scans, multiple login failures and other things that it thinks are ominous, and locks out the originating IP address by rewriting the iptables firewall rules.

For example, if you try to connect to the same server via http, https, ssh and svn within some short window of time, you are quite likely to incur its wrath. Developers at Industrial Logic often lock themselves out by getting blacklisted.

Generally when this happens, we ssh into one of our other servers, connect to the server that has blacklisted us, and execute the following command to see what is going on:

$ sudo /usr/sbin/csf -t

A/D IP address Port Dir Time To Live Comment
DENY 117.193.150.62 * in 9m 58s lfd – *Port Scan* detected from 117.193.150.62 (IN/India/-). 11 hits in the last 36 seconds

As you can see, csf blacklisted my IP for port scanning.

If your IP is the only record, you can flush the whole temporary block list by executing:

$ sudo /usr/sbin/csf -tf
DROP all opt — in !lo out * 117.193.150.62 -> 0.0.0.0/0
csf: 117.193.150.62 temporary block removed
csf: There are no temporary IP allows

Alternatively, you can execute the following command to remove just a specific IP (passing the IP you want to unblock):

$ sudo /usr/sbin/csf -tr 117.193.150.62

The easiest way to find your (external) IP address is to visit http://www.whatsmyip.org/
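If you are already at a terminal, a service like ifconfig.me usually does the same job (assuming it is reachable from your network):

$ curl ifconfig.me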

If you have a static IP, then you can whitelist yourself by executing:

$ sudo /usr/sbin/csf -a 117.193.150.62
