Dev: Flask Intro 1

Recently, I attended a Python Meetup. The first presenter was Carter Rabasa from Twilio. He gave two short demos of Flask. From the site:

Flask is a microframework for Python based on Werkzeug, Jinja 2 and good intentions.

More specifically, it’s a micro web framework.
A popular alternative is Django, a full server-side MVC web framework.

This motivated me to run out and buy the book, “Flask Web Development” by Miguel Grinberg. This blog series will follow my experiments between the book, external sources like Carter’s open source code, and anything else I might fancy. My bigger motivation here is to see if Flask can help shrink the code footprint of a standard REST interface. I have done these in C# and while I appreciate the larger frameworks, I often find they’re a bit overkill for my needs, particularly with the current trend in business shifting us towards microservices. This slide from the presentation sort of explains what I mean:
Snip20141124_1

First things first: running Flask on your dev machine.

Setting up your Virtual Environment on OS X:


sudo easy_install virtualenv
mkdir flask_test
cd flask_test
virtualenv venv
source venv/bin/activate
(venv) pip install flask

That installs VirtualEnv. From the site:

virtualenv is a tool to create isolated Python environments.

Best to keep it clean. As an aside: if you need more than this, you might look to Vagrant.
Then you make a directory, create a virtual environment, activate it, and finally install Flask.
You’re now ready to write the obligatory hello world:

Open a text editor (like SublimeText) and let’s put this code in:


from flask import Flask
app = Flask(__name__)

@app.route('/')
def index():
  return '<h1>Heyas world</h1>'

@app.route('/user/<name>')
def user(name):
  return '<h1>Heyas, %s.</h1>' % name

if __name__ == '__main__':
  app.run(debug=True)

Save that as flask01.py.

So what’s happening here:
We import Flask and create an app instance. If you’re familiar with Express, it’s similar.
Then you define two routes: the first handles requests to the base URL; the second shows off dynamic routes.
Then we run the app.

Back in your terminal:


(venv) python flask01.py

That should run your site:
Snip20141124_4

Snip20141124_5

Snip20141124_6

With debug mode on, you should be able to watch the HTTP status codes get logged as you try out your site:
Snip20141124_7

Okay there’s step 1.


Dev: Continuous Integration with Jenkins in a mixed Linux and Microsoft Environment

Explanation
Jenkins is an open source continuous integration server. It boasts 929 plugins to handle just about any bizarre requirement you can throw at a project. In most cases, setting up CI or build servers is a tedious but incredibly important part of any environment.

Why Setup Continuous Integration
If your career has remained in small shops, you may not realize how easy something like this can make your life. If you put a little investment into setting up CI, you can gain quite a lot of time and peace of mind back on your system’s environments.
For me there are two big wins here:

  • First: Continuous Integration setup means you can reasonably disconnect your developers from your deployments. In small shops this is interpreted as a loss of control; however, I sell this to developers as a risk management technique. No developer wants to be held accountable for breaking production at 2am. Automate this process, hand the keys over to the people who want to wake up at 2am, and back away slowly from the smoking gun.
  • Second: Continuous Integration is a positive feedback loop for good project maintenance. Once you have nightly builds configured, your developers will quickly learn not to check in broken code, and will use peer pressure to ridicule any developers who “break” a QA environment because they were too lazy to make sure their code built. It’s also nice to configure test cases to run before building, so breaks get caught. If they’re not caught and something breaks, you know you need more tests.

Jenkins on Linux
In most cases, you’re going to find examples of Jenkins being installed on a Linux server and configured through its administrative website. If you’re deploying to all Linux servers, life is easy. However, if you have even one server that requires any .Net compilation… well, life is not easy. You need the MSBuild plugin, which needs the MSBuild executable. Surprisingly, Microsoft does not actually make a Linux distribution of this tool (haha). If you rolled Jenkins on Debian or CentOS, well, you’re in a sticky place where you have to rely on Wine or Mono to hopefully execute a Windows binary. While this is a cute technical challenge, it’s also a waste of time in most cases that adds nothing to your project but hours and maybe a few Stack Exchange points.

Jenkins on Windows
If you run Jenkins on Windows, then there really are no technical challenges. Deploying to Linux and Windows systems is now doable with standard plugins.
With three plugins you can integrate a Git repository and deploy to Linux and Windows servers:

  1. Git Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin
  2. Publish Over SSH Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin
  3. MSBuild Plugin: https://wiki.jenkins-ci.org/display/JENKINS/MSBuild+Plugin

Installation on Windows
Jenkins is written in Java, so you will need the Java Runtime Environment (JRE) installed on your server.
JRE: http://www.oracle.com/technetwork/java/javase/downloads/jre7-downloads-1880261.html

You’ll also need to install Git on the server so Jenkins can use it.
Git: http://git-scm.com/downloads

Get the Jenkins Windows Installer: http://jenkins-ci.org/
Outside of the plugins, there are very few configurations you have to make. Go to Configure Security:
Click “enable security”
select “Jenkins own user database” as realm
select “matrix-based security”
and create an account.
Then add in whatever plugins you need.

There are a few “gotchas” to avoid frustration:

Regarding MSBuild
The Jenkins MSBuild plugin requires the .NET Framework 4.5 to be installed if the Visual Studio project depends on it (http://msdn.microsoft.com/en-us/windows/hardware/hh852363.aspx)

Regarding Git
The default Git path is wrong. In “Manage Jenkins–>Configure System”, change it to “C:\Program Files (x86)\Git\cmd”.
Be careful when you setup your Git credentials on your job. Jenkins will automatically try your credentials without asking over and over. If you typo’d it, your account will get locked out.

Regarding Publish Over SSH
Server Setup is here: Jenkins->Manage Jenkins->Configure System
This is where you configure and add servers. These will populate the server drop down when you are creating a new job.

Summary
Just to restate: the point is not that a Jenkins install on Linux cannot handle running MSBuild through Wine or Mono.
The point is that going through this exercise is not always mission critical. Jenkins can easily run on a Windows machine and handle deployments to all machine types, without the extra time spent on the workarounds above.


Dev: Enterprise Backbone.js Project Setup

Today I want to discuss the hows and whys of a Backbone.JS (http://backbonejs.org/) implementation.

Ultimately, doing client side MVC, you come face to face with Google’s popular Angular framework. Many people will question you if you use anything but Angular. Another architect I met summed it up well, though. He said that if you have to teach a team Angular, it takes some time, because you have to learn the “Angular” way of doing things. Using Backbone, it’s a straight Model View Controller setup. There’s really nothing new in there to learn, and I’m a big fan of eliminating learning curves for teams when possible.

So, that’s as good a reason as any. Now, once you’ve decided to go down the Backbone road, there are a lot of opinionated decisions you have to make. Ironic, since Backbone is supposed to be “opinionated” according to some definitions. Personally I think there were still too many choices left up to the developer… ergo pitfalls.

Folder Structure
The first thing you have to decide is how you want to setup your project.
If you’re smart, you want to do something that will make sense to programmers that come later.
Here is what we ended up using:


[public]
     |-- [scss]
     |-- [img]
     |-- [vendor] (where you stuff 3rd party plugins)
     |-- [js]
         |-- [lib] (external scripts: handlebars, foundations, i18next, etc)
         |-- [collections] (your backbone extended collections)
         |-- [models] (your backbone extended models)
         |-- [templates] (handlebars templates for programmatic views)
         |-- [views] (your backbone extended views)
         |-- app.js (configure your backbone app)
         |-- main.js
         |-- require-config.js (setup require.js dependencies)
         |-- router.js (setup your routes)
     |-- [locales]
     |-- [tests]
     |-- Gruntfile.js (builds the project)
     |-- package.json (contains all the packages...)
     |-- server.js (contains the Node.js website configuration)
     |-- build.js (optimizes javascript files)
     |-- 404.html
     |-- favicon.ico
     |-- index.html

This came from Joseph LeBlanc’s excellent Backbone.js course on Lynda.com. Six months ago, I tried to follow Addy Osmani’s Backbone book, only to find its source code was out of date and didn’t work. There are some good ideas in the book, but the code won’t run as printed.
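
To make that tree concrete, here’s roughly what require-config.js might contain. This is a minimal sketch assuming the usual AMD setup; the exact filenames under lib are placeholders, not the course’s code:


require.config({
  baseUrl: 'js',
  paths: {
    jquery:     'lib/jquery',
    underscore: 'lib/underscore',
    backbone:   'lib/backbone',
    handlebars: 'lib/handlebars'
  },
  shim: {
    // Handlebars doesn't register itself as an AMD module, so expose the global
    handlebars: { exports: 'Handlebars' }
  }
});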

Here’s a short description of the 3rd party scripts we started with:

Style Sheets
Foundation handles the mobile-first design.
For our own CSS organization, we follow the SMACSS (Scalable and Modular Architecture for CSS – http://smacss.com/) approach. On top of that, we’re using SASS with Compass, which compiles the .scss files.


SASS and LESS are popular CSS authoring frameworks (Foundation is built on SASS, so it aligns better than LESS for us). That means they manage and compile CSS code for your project. To get the build/compile functionality, we’re using Compass, which is written in Ruby, so you need Ruby installed for it to work.

The folder structure ends up looking like this:


[public]
     |-- [css]
         |-- main.css (generated)
     |-- [scss]
         |-- [base]
         |-- [layout]
         |-- [module]
         |-- [state]
         |-- [theme]
         |-- _mixins.scss
         |-- _variables.scss
         |-- main.scss

Now let’s explain that a bit.
The base structure comes from SMACSS.
main.scss: imports all the other scss files. It’s the compilation entry point, and thus the source of the only CSS file that will be minified.
_mixins.scss: Custom mixins.
_variables.scss: Contains scss variables (colors/fonts).

You define styles in base, layout, module, state, and theme.
So if you create a Backbone View called “medical-survey.js”, then the handlebars template would be “medical-survey.html” and the style sheet would be “medical-survey.scss”.

On small projects, you might think that’s annoying, but on a large scale project, this is very handy. It allows us to chunk out work, while still compiling it all into a single, minified style sheet at the end.
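
As a sketch of how that naming convention plays out in a view (the module paths and the RequireJS text plugin for template loading are assumptions, not our exact source):


// views/medical-survey.js
define(['backbone', 'handlebars', 'text!templates/medical-survey.html'],
  function (Backbone, Handlebars, templateHtml) {
    return Backbone.View.extend({
      template: Handlebars.compile(templateHtml),
      render: function () {
        // the matching styles live in scss/module/medical-survey.scss
        this.$el.html(this.template(this.model.toJSON()));
        return this;
      }
    });
  });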

The important part of SASS comes with this setup line of code:


sass --watch scss:css --style compressed

This tells SASS (running via Ruby/Compass) to watch the folders under scss. SASS follows the imports in main.scss, pulls in all the .scss files, and compiles the result into the final main.css. This file *will* come out very large. Even with the white space stripped out, it will often be huge and can slow down your site unacceptably…

Minification and Compression
So: you’ve implemented Require.js in the project to handle the efficient loading of JavaScript files. What do you do about the massive CSS file you just generated using SASS above? If you are using Node.js, there’s a solution in the middleware: compression.

The website discussing it is here:
https://github.com/expressjs/compression

This will use gzip to decrease the file size when loading the website.
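
Wiring it in takes one middleware line. A minimal sketch, assuming Express 4 with the compression package installed from npm (the static folder path is an assumption):


var express = require('express');
var compression = require('compression');
var app = express();

// gzip every compressible response, including that huge main.css
app.use(compression());
app.use(express.static(__dirname + '/public'));
app.listen(8080);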

There’s a lot of thought, effort, and detail completely missing from this blog. But if you have any questions, please let me know. My goal is always to help teams use simple and tested methods, but when we use newer techniques, we’re faced with not being able to get the best information about using them. This is definitely a major caveat to using technology developed in the last 3-5 years. We often do it anyway, because new techniques are usually developed to resolve an annoyance we’ve been dealing with. Node.js resolves the annoyance of web servers like IIS and Apache being a bit slower and more complex than we would like. It also adapts well in scalable scenarios.

Likewise, I don’t see any reasonable way to avoid the modern problem of having ridiculous amounts of Javascript and CSS. It’s just the modern problem. We don’t want to take the time hit to a project (and should not) to reinvent the wheel by rewriting things other developers have done via JQuery and other handy tools.
I have been coding CSS since 1998, and what began as a tool to *simplify* applying global (and thus branded) styles has grown into a monster; albeit a highly useful one that most developers simply won’t take the time to really understand. SASS and LESS are simply ways for these developers to deal with the massive amount of CSS by compiling it down into at least one file. And compression makes it manageable. But it doesn’t solve the root problem:
The CSS cat has gotten out of the bag and no one is quite sure how to get it back in.

The other root problem is unsolvable: the companies building the browsers will never follow a standard unless it’s the “standard” they made. Just accept it and you’ll be happier.


Setup: MariaDB 10.0 on Azure

Back again with some more notes.

Opinion and Motivation

MS SQL is a popular tool, but it has a big hurdle in cloud adoption: pricing.
The obfuscation of pricing in Azure hurts. When a client asks how much it will cost to run a MS SQL cluster; no one can answer this very well. It’s sort of like asking Azure billing support how much running Sharepoint in Azure would cost (this is an inside joke; most of the Azure billing support reps don’t know about the Office365 cloud).
When you finally get an answer, well, it’s quite expensive. You’re paying for the Azure DB or the VM running SQL 2012; you’re paying for licenses, cores, compute time, huh? Enterprise shops are not immune to budget cuts. If you’re simultaneously being asked to migrate to the cloud and cut expenses… remember this post.

Solution: Use another popular relational database: MySQL
MySQL is the famed open source database.

Problem 1: Oracle bought it. If you don’t get why that’s a problem; you’re probably an executive. Congrats.
Problem 2: The original developer left and built a better version.

New Solution: Use the less popular open source relational database: MariaDB
Never heard of it? That’s okay. This post explains it pretty well:

Okay, so, that’s why, now let’s focus on the how-to.
If you followed my last blog, you’ve got a Virtual Machine in the Azure Cloud running Ubuntu 13.10. So what we’re going to do now is:

  • Install MariaDB 10.0
  • Connect to it via MySQL Workbench from my local desktop
  • Create a test database
  • Go back to the Azure VM and prove it worked.

Install MariaDB

Here is the official website:

Open Putty and connect to your Azure VM running Ubuntu.
Issue these commands:


sudo apt-get install software-properties-common
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
sudo add-apt-repository 'deb http://download.nus.edu.sg/mirror/mariadb/repo/10.0/ubuntu saucy main'

That sets up the key and repository. Now you can install it.


sudo apt-get update
sudo apt-get install mariadb-server

This takes you to the following setup screen for MariaDB:
azure-ubuntu-mariadb

Verify MariaDB Works & Nice-to-Knows

My source for this is the documentation:

To connect on your Azure VM to your MariaDB installation:


mysql -u root -p -h localhost

Here’s what you should see:
mariadb-login

Great, now let’s just take it a little further so we have something to work with.
Enter the following commands into MariaDB to create a test database and look around.


CREATE DATABASE IF NOT EXISTS test;
SHOW DATABASES;

Okay, so, it’s installed, alive, and working properly.

If you want to connect remotely to your MariaDB/MySQL database, be sure to:
Create an ENDPOINT on the Azure Virtual Machine
Go to your Azure Portal, click on Virtual Machines in the left navigation, and select your test VM.
Go to the EndPoints tab in the main portion of the portal.
At the bottom center, select Add (which implies you want to add an endpoint).
Select MySQL from the Endpoint drop down and it should look like this:
azure-endpoint-mysql

In a near-future post I will describe how to connect to your MariaDB via MySQL Workbench.


Setup: Azure, Ubuntu 13.10, Node.js, Express.js

In my last post, I covered how you would set up a VirtualBox virtual machine on a Windows host for an Ubuntu 13.10 guest, running as a web server with Node.js.

This post will take it to the next level: the cloud. Microsoft’s Azure cloud specifically. We will spin up a virtual machine in Azure with an Ubuntu image. From what I’ve seen, the Azure team has put a lot of effort into their Ubuntu images in the gallery. They’re pretty nice.

The next step will be to set up an SSH connection using Putty. Putty tends to be the most popular way to connect to remote servers that only have a command line. It’s not very exciting from a user interface perspective, but these are web servers, not gaming machines.

After the Putty install, certificate key generation, and connection to the server, the rest will be the usual install with some additional information I learned to make the process a bit better. Specifically, I’ll focus on setting up Express.js on top of Node.js. Express.js is a server side MVC framework that runs on top of Node.js and makes handling requests just a little bit easier.

Okay, on with the walk through.

The Azure Portal part

Log into your Azure Portal using your Windows Live Login ID.
On the left navigation of the Azure portal, click Virtual Machines.
azure-virtual-machines

Then in the bottom left, there will be a “+New” button. Click that to begin creating your new virtual machine.
Pick through the choices like in this image:
azure-virtual-machine-gallery

Choose the Ubuntu 13.10:
azure-virtual-machine-gallery-ubuntu

This will take you to the first server setup page. Before you can fill this page out though, you need a very important thing: an Azure-compatible key.
So, open a new browser tab, and download a program that generates SSH keys.

If you want to know how to generate an SSH key for your Windows Azure account, please follow the instructions here:
http://www.windowsazure.com/en-us/documentation/articles/linux-use-ssh-key/
It can be a pain, but eventually you’ll end up with a .pem file that you enter on the Azure VM page.

Fill out the rest of the information: server name, size (small), username, password along with your .pem cert file.
This takes you to the second screen of the VM setup, where you can choose your DNS name, storage account, and regional affinity.
Choose a region physically closest to you or your client.

The final screen lets you configure your End Points. You want a web server and you want to control it through Putty, so you need three: HTTP, HTTPS, and SSH. Just select them in the drop down and Azure does the work for you:
azure-virtual-machine-endpoints

Create the server and watch it spin for a few minutes. Now that part is done. Next we will setup Putty. If you are a Windows Developer, you may never have used a Telnet or SSH client before. You may feel like you’ve gone back in history 20,000 years to witness the awesome power of a server without a user interface. And you may be surprised to know that this is exactly why Unix admins made jokes behind your back. Well, now it’s time to put that behind you.

The Putty part

Okay, so the first thing you need is not Putty, it’s the puttygen program. It can be downloaded here: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html.
You might be thinking: why is that URL not something like www.putty.com or some more legit-sounding domain? Suffice it to say, the developer doesn’t care.

On that page, look for the binaries section and download puttygen and putty.
puttygen-download

The Ubuntu part

Okay, so you’ve got your VM, you’ve installed and configured Putty, and you’re connected to the VM. Here’s the command line steps to get a minimal install setup:


sudo apt-get update
sudo apt-get install -y python-software-properties python g++ make
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install g++ curl libssl-dev apache2-utils
sudo apt-get install git-core
sudo apt-get install nodejs
sudo npm install express
sudo nano package.json

Copy these lines into your window, then hit CTRL-X, Y, Enter. This file sets up the dependencies for the project. In this case, we’re just including express. After this, when we do the sudo npm install, that command will run through the package.json file and install any libraries under the dependencies section. If you have them mistyped… well, it won’t work.


{
  "name": "a-test-app-no-spaces",
  "description": "spaces are okay here",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "express": "3.x"
  }
}

sudo nano server.js


var path    = require("path");
var fs      = require("fs");
var http    = require("http");
var express = require("express");
var app     = express();

//capture logs in the console
app.use(express.logger("dev"));

//serve static files - blank in the quotes means server.js is in the same folder as your HTML files.
app.use(express.static(__dirname + ''));

//404 error
app.use(function(req, res, next) {
  res.send(404, "file not found");
});

//start server
app.listen(80);
console.log("listening on port 80");

sudo npm install
sudo node server.js

So, after all this, you can open a browser to the domain name of your server, and you should see the 404 error message, since you have no HTML files:
azure-express-web-server

And if you are watching your Putty session, you should see the activity logged by Express.js like this:
azure-putty-express

And there you have it.
So now, we’ve covered how to set up this web server in both VirtualBox and Windows Azure.
Thanks for checking in.


Setup: VirtualBox 4.3.6, Ubuntu 13.10 CLI, Node.js, Forever.js

In my last post I covered a popular implementation of BackboneJS in Visual Studio.
[http://michaeldukehall.com/visual-studio-2013-single-page-application-with-backbonejs/]

I was not happy with it, primarily because mixing a JavaScript client-side MV* solution with an ASP.Net server-side MVC solution feels clunky.
So, I’ve set out to do a clean Backbone setup. Which led me to the desire for a clean Web Server, which led me to NginX, and then to Node.
So, this post is a walkthrough of setting up a Node Web Server in Virtual Box.

The Steps:

  1. Install Virtual Box
  2. Create a Virtual Machine with Ubuntu 13.10 64bit CLI iso
  3. Install Guest Additions for your VBox version
  4. Install Node
  5. Install Express
  6. Install Forever

For brevity, I will skip the Virtual Box and Guest Additions steps as I’ve already covered them before.
The ISO for the Ubuntu 13.10 CLI is here:
http://releases.ubuntu.com/saucy/
For this walkthrough, ensure you get the Server version: 64-bit PC (AMD64) server install image

In the settings for the VM, you can do what you like or use what I did:

  • RAM: 4gb
  • Select “create a virtual hard drive now”
  • choose VDI
  • Use dynamically allocated
  • Give it 20GB
  • Go to “settings”
  • Select General –> Advanced
  • Enable shared clipboard and drag ‘n’ drop
  • Uncheck the mini toolbar
  • Go to storage
  • Click the empty disk under controller: IDE
  • Click the little disk icon with a down arrow and select your Ubuntu server iso.
  • Networking:
  • Leave as NAT unless you know how to setup a bridged connection on your computer

ERROR ALERT: If you get the error: VT-x is disabled in the BIOS.
FIX:

  • Reboot your machine into BIOS mode.
  • Find the Virtualization Setting
  • Enable it (sometimes there are two)
  • Save and Exit
  • Error should be fixed

Start the server and it will go into the Ubuntu install screens

ERROR ALERT: If you get the error: This kernel requires an x86-64 CPU, but only detected an i686 CPU.
FIX: Close the machine, open settings, and change your OS to the Ubuntu/Linux 64 bit version.

Setup your username and password and jot that down for later.

When you get to the Software Selection screen; choose LAMP or SSH.
Azure Note: When you setup Ubuntu on a VM in Azure, this choice is made for you; SSH I believe.

Once complete, a command line interface will come back and ask you for the server login.
Enter your username and password

INSTALLING NODE
There are many, many ways to install Node on Unix. If you’re searching the intertubes, you will find a few different scenarios:

  • Installing from source on git. I don’t suggest this route
  • Installing through apt-get; this is a nice and easy route
  • Installing through NVM; this is also nice and easy
  • Installing straight from your Ubuntu; also nice and easy

There’s no “right way” so I will cover from git and from apt-get.
If you watch the Lynda.com training on Node, Joseph LeBlanc uses NVM so he can quickly switch out which version of Node he is using.

Here is how to install it from GitHub…
First, install dependencies:


  sudo apt-get install g++ curl libssl-dev apache2-utils
  sudo apt-get install git-core
  sudo apt-get install make
  # Then install:
  git clone git://github.com/joyent/node
  cd node
  ./configure
  make
  sudo make install

Here is how to install it from apt-get

sudo apt-get install nodejs

HELLO WORLD:
cd node/
sudo nano hello_world.js


var http=require('http');

http.createServer(function (req,res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello Node\n');
}).listen(8080, "127.0.0.1");

console.log('server running at http://localhost:8080');

What you’ve done here is very rudimentary, but core.
You’ve brought in a core Node module called HTTP and assigned it to a local variable.
Then you used that local variable to access the “createServer” function to start a basic HTTP server.
Then you configured the “res” (response) object of that server to write out an HTTP 200 with a content type of plain text (not HTML).
Then you tell it to “listen” on port 8080 of your localhost.
And finally you write to the console where it should be running.

Save and exit.
Now use the “node” command to run your script.

sudo node hello_world.js

At this point you can open a browser to http://localhost:8080 and see your message (on a CLI-only VM behind NAT, test with curl from inside the VM, or set up port forwarding first).

This brings you to the first challenge of Node: how do you make it run as a service?
There are several ways; I will cover the Forever solution.
https://github.com/nodejitsu/forever

INSTALL FOREVER

sudo npm install forever -g

This will use the Node Package Manager to install forever (-g makes it a global installation, so it’s usable at the command line).

Once complete:

sudo forever start hello_world.js

Now your node server is running as a service and if it dies, Forever will restart it.


Dev: Visual Studio 2013 Single Page Application with BackBoneJS

Introduction:
As part of my exploration from the last blog post I have been digging into BackBoneJS. Here I take a look at getting started with BackBoneJS in a Microsoft environment. Ultimately, I don’t think this is a very clean solution, so I’ll follow up with another that’s not integrated with ASP.Net’s MVC.

There are a few requirements for this post.

Our goal with this website is to get a basic MVC website up and running using the BackBoneJS framework.
You can learn more about BackBoneJS here: http://backbonejs.org/
So, once you’ve got Visual Studio installed and running, and the BackBoneJS template installed, go ahead and create a new Visual C# Web ASP.NET Web Application. It should look like this:
vs2013-spa-backbonejs-01
This will give you a new window of options; choose the Single Page Application.
vs2013-spa-backbonejs-02
Okay, let that build the solution. If you want to see what it does right off, run it with F5.
This template uses the popular Bootstrap framework (CSS3) to achieve a “responsive” look and feel. Responsive simply means the website will attempt to mold itself to whatever screen size your users browse the site with, be that a tiny screen on a smartphone or a big screen on a desktop computer. This concept can save you a lot of development time down the road when clients ask for a version of your site that works on their iPad. Responsive is better, in my opinion, than a mobile version of a website. This comic attempts to explain precisely why: http://xkcd.com/869/

You can learn more about Bootstrap at their website: http://getbootstrap.com/

We’re using Bootstrap with this template automatically, but I don’t want to use the default Bootstrap theme. It’s unoriginal and sort of lazy to use the default theme. So, I’ll go to a website that offers other themes that work with Bootstrap: http://bootswatch.com/ and download the “Slate” theme. Save the “bootstrap.css” and “bootstrap.min.css” to your project’s “Content” folder. This will overwrite the defaults that came with the project.

Centering images in the JumboTron

Personally, I’m going for a pretty simple page here. A centered logo at the top, followed by some page content with images. For the “header” section of a web page, Bootstrap delivers JumboTron. In their words, “A lightweight, flexible component that can optionally extend the entire viewport to showcase key content on your site.” You can learn more about the JumboTron on their website: http://getbootstrap.com/components/#jumbotron

What JumboTron does not do out of the box is give you a class to center your image. Developers will waste hours trying to hack the CSS, but, CSS requires finesse, not muscle. Here’s the code that accomplishes a centered image without much fuss:


<div class="jumbotron">
    <div class="row well well-lg">
        <div class="col-md-6 col-md-offset-3">
            <img src="~/Content/logo_bw_trans.png" alt="header" class="img-responsive text-center" />
        </div>
    </div>
</div>

I found this, like almost all code snippets, on stackoverflow: http://stackoverflow.com/questions/18706544/jumbotron-alignment-issues

The Grid System

Bootstrap uses a popular CSS technique for laying out web pages. In bygone years, this was popularized by the creators of CSS frameworks like the 960, http://960.gs/, and BluePrint, http://www.blueprintcss.org/. From my perspective, these CSS frameworks became popular when UI developers realized the middle tier devs weren’t going to take the time to learn CSS and would keep using HTML tables to lay out sites. So, they made CSS frameworks to try to help those same devs. Even then it took several years for frameworks like Bootstrap to make it easier. I believe Twitter’s Bootstrap may have grown up from HTML5Boilerplate http://html5boilerplate.com/, but, I don’t know.

The default template starts me off with a 3 section layout, but I only want 2. So, here is what they give us in the template:


<div class="row">
    <div class="col-md-4">
        …content…
    </div>
    <div class="col-md-4">
        …content…
    </div>
    <div class="col-md-4">
        …content…
    </div>
</div>

Without understanding the grid system, you can quickly see there’s some logic to this. The class “col-md-4” seems to have a naming convention to it. It does, and it is explained in detail here: http://getbootstrap.com/css/#grid. If your guess was that they all add up to 12, then you’re right! I want 2 columns, so mine is reduced to this:

<div class="row">
    <div class="col-md-6">
        …content…
    </div>
    <div class="col-md-6">
        …content…
    </div>
</div>

Now, I want four rows of content with two columns, so I’ll just copy and paste that a few times and fill in the content. Once that’s done I want a section at the bottom with a big button telling my users what to do. As you are dropping content and images onto the page, you might notice that your images don’t come out the size you made them.

So if we look at this piece of code:


<img src="~/Content/dojo-path.png" alt="header" class="img-responsive text-center" />

You can see the class “img-responsive.” This is one of those magic Bootstrap CSS3 classes that makes your website scale from smartphones up to big screens. While you may be tempted to take this off, I advise you leave it and let Bootstrap do what it knows best.
At the end of the page I want an email sign-up form so I can keep in touch with my prospective customers. Email sign-up forms are something almost every website in existence uses, so there should be very little coding here. But search through the Bootstrap website and you won’t find one. Luckily there’s another website, http://bootsnipp.com/, and if you do a quick search on sign up forms, you’ll see there are a few to choose from. I liked this one: http://bootsnipp.com/snippets/featured/sign-up-form.

Well, that’s enough to get your basic functionality so you can wire in some email server. But I’d like to go a bit further.
I already have an account with MailChimp, a popular mailing list website, http://mailchimp.com/, so let’s see what it takes to wire up a signup form to a MailChimp auto-responder list. If you have a MailChimp account, you can get the basic code for a signup form, combine it with some of the Bootstrap visual enhancements, and end up with code like this:


<!-- Begin MailChimp Signup Form -->
<div id="mc_embed_signup" class="text-center">
<form action="http:/url" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate>
<input type="email" value="" name="EMAIL" class="span6" id="mce-EMAIL" placeholder="email address" required>
                <!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
<div style="position: absolute; left: -5000px;"><input type="text" name="b_31c7d2f366bf7abc8b70e0bf3_64a94b06cb" value=""></div>

                    <button type="submit" id="mc-embedded-subscribe" class="btn btn-default btn-lg">
                        <span class="glyphicon glyphicon-off btn-lg"></span> Subscribe
                    </button>
                
</form>
</div>
<!--End mc_embed_signup-->

This gives you a decent looking sign up like this:
vs2013-spa-backbonejs-03
Which works. And when you hit submit, it opens a new window from MailChimp for the user to confirm their information… which sucks.
What I really want is to use the MailChimp API so I can handle the request from within the application. Since we’re not using WordPress or Drupal, we need to do this with ASP.Net. Unsurprisingly, someone has already done this, and their GitHub project is here: https://github.com/danesparza/MailChimp.NET

So, let’s get to it. We’re going to install this into our project using the Package Manager Console [Tools–Library Package Manager–Package Manager Console] and type: Install-Package MailChimp.NET

That should get you a bunch of successful messages. Next I need my API key from MailChimp. That’s covered here: http://kb.mailchimp.com/article/where-can-i-find-my-api-key — essentially, it’s: primary dashboard–Account Settings–Extras–API Keys

Okay, you’ve imported the MailChimp API, you have your secret API key, now it’s time to go to your Controller and write your function.
Throw these imports into the top of the Controller:


using MailChimp;
using MailChimp.Lists;
using MailChimp.Helper;

Then add a function:

public void SubscribeEmail() {
    MailChimpManager mc = new MailChimpManager("YourApiKeyHere-us2");
    // Create the email parameter
    EmailParameter email = new EmailParameter()
    {
        Email = "customeremail@righthere.com"
    };
    EmailParameter results = mc.Subscribe("YourListID", email);
}

But that will wait till next time.


More Holistic Web Architecture

A lot of architecture on the web discusses the problem from a less than holistic perspective.  With this blog I am attempting to start down a path that answers more than just the “web related” interests with its architecture.  So, it’s friendlier towards reporting, security, and operations teams.  A lot of my success comes from taking applications that were purely “developer centric” and teasing out messy bits to work more transparently for the ops teams and business leaders.

For this, the only real constraints I had were: ASP.Net, RESTful web service layer, and a three data center (global clients) web farm model.

It can be roughly described from the top-down as follows:

web-arch-01

Use NGINX (a lightweight web server) as a reverse proxy to handle routing to three global web farms by IP address location.  Additional research has raised the potential for inserting more thorough DDoS detection at this layer.  Further research raises the potential for serving all static content from this level, potentially combining Varnish with NGINX, to reduce the number of hops for the user to get to the images and HTML for the site.

web-arch-02

Maintain a User Interface layer using ASP.Net MVC4 combined with a BackboneJS framework along with underscoreJS and JQuery.  There are further questions around whether SPA (Single Page Application, like HULU has) is better for your content or not.  Regardless, SPA has a lot of fans these days.  The frameworks seem to boil down to BackboneJS vs. KnockoutJS.  Further research revealed some opinion-based leanings toward BackboneJS: it has a larger community of developers (unverified) and has built-in hooks for a RESTful web service layer.  There is also a question of what is the best library or popular method to sanitize requests against XSS (cross site scripting) and SQLi (SQL injection).  I find some .Net/Java developers ignore the security layer because they feel safe within their frameworks.  However, I observe modern developers shifting towards faster and more responsive JavaScript libraries, and so, I want to keep an eye on this.  The frameworks only protect you if you use their compilers.
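
Those built-in REST hooks are most of the appeal here. A minimal sketch of what Backbone maps to HTTP for free, assuming a hypothetical /api/users endpoint:


var User = Backbone.Model.extend({ urlRoot: '/api/users' });

var user = new User({ id: 42 });
user.fetch();   // GET    /api/users/42
user.save();    // PUT    /api/users/42 (POST when the model has no id yet)
user.destroy(); // DELETE /api/users/42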

web-arch-03

For the caching part, I kept coming across success stories in web farms using Memcached.  Just to keep an eye on MS Azure: at this point, there is some potential interest in Windows Azure Caching (Preview).  However, there appears to be a concern, since MS Azure Caching in other forms has been cost prohibitive.  Also, as a MS developer, I’m just as concerned when choosing newer MS technologies as open source ones regarding long term durability (is it maintained? is there a healthy community?).  Memcached apparently does the job well in web farm situations, so it seems to be a first choice.

web-arch-04

So, the Service layer.  ASP.Net Web API wins over WCF as a lightweight RESTful web service layer that speaks JSON.  Versioning in the services would be handled through the URI model, and operations would be kept minimal to required functionality with the HTTP verbs.  Regarding speed…  I’ve been on both sides of this question: use a service layer for Web-DB communications vs. a regular code layer.  I know theoretically the straight code would be faster in a small app situation.  I know, despite debating, that Web API would be faster than WCF in many situations.  I know that any extensibility with external systems would be optimally built in a services fashion.  So, to me, this is less about writing SOA or not, and more about: if I have a team that already has to code out a services layer, why confuse them with internal/external questions.  I like to simplify things as much as possible up front, because I’ve seen many complex architectures fail out of the gate because the devs don’t get it and ultimately have a pressing deadline that takes priority over the purity of the concept.
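
URI versioning is simple enough to sketch. Illustrated here in Express for brevity (the real layer would be ASP.Net Web API, and the route names are hypothetical):


var express = require('express');
var app = express();

// v1 stays frozen for existing clients
app.get('/api/v1/orders/:id', function (req, res) {
  res.json({ id: req.params.id, schema: 'v1' });
});

// breaking changes land under a new URI version
app.get('/api/v2/orders/:id', function (req, res) {
  res.json({ orderId: req.params.id, schema: 'v2' });
});

app.listen(8080);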

This is where authentication is going to pass through, so we have OAuth 2.0 vs. HMAC.  The traditional way is to do authentication over HTTPS encryption, but that’s only encrypted over the wire and not at the end points, which opens the application up to man-in-the-middle attacks.  Research showed that Amazon, at some point, avoided this by not using OAuth and instead used HMAC.  Others did two-legged OAuth.  Regardless, caution needs to be taken here to choose a method that actually works before I start coding.  The thought of implementing an insecure authentication method out of ignorance is, in my mind, a pretty avoidable problem.

web-arch-05

The data access code…  In fifteen years I’ve seen a lot of paths taken here.  Some of them were light and painless, but regarded by some architects as distinctly “un-MS.”  Personally, MS doesn’t pay me, so I have no loyalty to their lollipop data access flavors.  I have seen and used Entity Framework since its inception, and I pretty much find it a great example of an “ivory tower” concept that fails to live up to expectations in the real world.  I don’t need a DAL layer that knows how to talk to SQL, MySQL, Oracle, etc…  I never really have, either.  Even in huge applications where mainframes were still in production this would not have helped.  Someone had already built that layer.  So, at this point I’d prefer a super simple layer with code minimized and tailored to the one database I have in production.  If down the road a merger took place and I ended up with 2 databases, I’d cross that bridge then rather than gimp a solution for things that “may occur.”  So, custom ADO.Net or an ORM or both.

Using ADO.Net to build the communications to a database usually means that SQLi has been defeated at this point.  That, and ensuring that no user input is used to build any query strings dynamically.  Additionally, at this point we have to consider making the calls to the database using TLS (Transport Layer Security).  I had an additional thought I have not seen implemented but I have wondered about.  The idea is my Services will request data from my database, but how do I know all those requests came from the Services?  What if they were spoofed?  What if some savvy blackhat put a copy of my UI website on a thumb drive using WGET for the presentation layer, and that site made a seemingly legit call back to my database?  I don’t know; could be paranoid, but these days…  So, the idea is to use something (HMAC) to make sure those requests are legit and then to route the other traffic to a honeypot database where I can monitor requests and try to track the traffic over time to find my little “helper.”
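
The honeypot-routing idea only needs a shared secret and an HMAC per request. A sketch using Node’s built-in crypto module (the secret handling and header are hypothetical; in practice the secret comes from config, and a constant-time compare is better):


var crypto = require('crypto');
var SHARED_SECRET = 'known-only-to-ui-and-services'; // hypothetical; load from config

function sign(body) {
  return crypto.createHmac('sha256', SHARED_SECRET).update(body).digest('hex');
}

// The service layer sends sign(body) alongside each request, e.g. in an
// X-Request-Signature header; the data layer recomputes it, and any
// mismatch gets routed to the honeypot database instead.
function isLegit(body, signature) {
  return sign(body) === signature;
}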

Down to the relational database layer…  Could be SQL Express, could be MariaDB (over MySQL).  Honestly, this doesn’t concern me, because I wouldn’t choose to use many of the “bells and whistles” and I would choose to treat my database like a dumb trashcan for data that may blow up at any time.  Its only value to me is that it’s cheap and fast, because if we’re successful, we’ll need more of them.  I’ve seen plenty of enterprise solutions use the most “pimped out” MS SQL servers they could have, and they paid handsomely for it up front and down the road.  I prefer to let the programmers solve the hard problems and just use sharding to reduce the stress on a cheaper database.

Which brings me to sharding.  I know sharding scales better than siloing, but I also know that the optimal sharding method requires some pretty insightful choices and a fast code layer to help the data calls get routed and bunched properly.  The example often given is by users alphabetically, but I’m curious if there’s some more optimal way to choose that client sharding other than common sense.  Having studied MySpace and Amazon and others, this seems like a really painful road each company goes through; it often takes a few tries to get just right.
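
For what it’s worth, the routing code itself is the easy part. A sketch of modulo-based shard selection, assuming a fixed shard count and numeric user ids (both assumptions):


var SHARD_COUNT = 4; // hypothetical fixed pool of database shards

function shardFor(userId) {
  // Modulo routing is cheap and spreads load evenly, but adding shards
  // later means re-homing data — one reason this takes a few tries.
  return 'users_shard_' + (userId % SHARD_COUNT);
}

shardFor(1001); // "users_shard_1"

The hard part is what the paragraph above says: picking a shard key that keeps related data together and the load balanced.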

So, at this point we have a basic architecture, but it’s missing, in my opinion, some very key components: a way to monitor everything, and a way to get Sales/Marketing all those reports without screwing up my database traffic.  Oh, and giving the Security/Audit teams some toys would be nice.

web-arch-06

I’ve worked with Ops guys and I’ve learned they can be your best friends or they can really hate you because you give them nothing to work with.  I like Ops.  So, I want to try out a distributed monitoring tool that has its hooks in everything without compromising anything.  From what I’m reading, and what I’ve experienced, this just isn’t one of those areas that everyone thinks about.  Ironic to me how most devs can debate endlessly about OOP or MVC vs. MVVM, but few have an answer to “how do you measure the better-ness of your OOP solution?”  Sometimes they say that’s another team’s responsibility…  Now that’s teamwork.  Anyway, numbers are how we measure, not religious devotion to decoupled systems and high-minded PhD white papers from MS/Oracle.

So, the weak consensus boiled down to a couple paths:

  • Ganglia (for metrics) + Nagios (for alerts)
  • Sensu + Collectd + Graphite + Logstash
  • Splunk

Now, all that really feels like heavy Ops, but not enough security.  It’s good to know when servers are tanking and databases are hung, but I’d sure like to know when a friendly is helping me “test” my system by initiating a DDoS attack on Web Farm A or a port scan on one of my service layers.  So, where do we plug in SNORT or some other traffic-monitoring security app?

Finally, the reporting.  I don’t know the statistics, but I’m pretty sure a high percentage of any “Data Warehouse” project I’ve ever observed from the sidelines failed miserably…  They failed in different ways.  Usually, the original devs were too busy, so they just created reporting straight off production databases.  That works long enough for them to get a new job, and a couple years later business users start complaining about load times when they fire off a historical report against a database.  Hey, how are they supposed to know?  It was fine when Scott wrote it two years ago…  No, no one has cleaned out the history or log files or rebuilt indexes or whatever…  So eventually some BI company hears the complaints and sells them a big DW package which has more knobs than a space station.  Oh, you wanted consulting?  That’s cost prohibitive, but we can teach your dev for 2 hours and they’ll have it…  Oh, your good devs don’t have time/interest in DW?  Just give me your worst, laziest, most checked-out dev…  Okay, long story short, but that’s what I run into when it comes to the sad, sad land of reporting.

Which is even sadder, because REPORTS are for EXECUTIVES much of the time.  This is precisely how IT departments get judged and perceived by their corporation’s executive sales and marketing leaders.  Okay, so, here’s my new thought on solving this much unseen problem in IT.

web-arch-07

You have a standalone SQL Enterprise Edition database just for reporting.  You set up a Quartz scheduler app to pull data every 2/4/6/24 hours from the prod databases and transform it into quantitatively friendly tables for easy reporting.  Then you spend some cash and get Telerik Reporting with the responsive design (so it works on mobile), loaded up on a server and dishing those reports out.  I’m pretty sure this would take less time, despite costs, and satisfy more executives (who don’t want to come to the office to view a report).  And really, outside of the data transformations, you could feasibly hand a Telerik solution to a B player on your team and it would still look like “magic rocket ships” to the leadership teams.  But the data pulls…  have to be fast.  The new guy shouldn’t be handed Entity Framework with a blog on how to write LINQ and put in a corner.  This almost always results in high load times and absolutely unforgivable LINQ-generated SQL.  I know, it’s not LINQ’s fault it’s smarter than the average dev, but that’s the world we’re in.

 

This is a really fun thought experiment for me so I’m going to continue posts that begin building out each part to expose incorrect assumptions and show metrics where I can.


Windows Azure Cloud Costs Analysis

If you’ve ever tried to figure out what Cloud Hosting would cost, you may have thought, “wow, that’s complex.”

I’m here to confirm that thought and maybe help out with the knowledge I gained from 4 days of back-and-forth emails with Microsoft’s Billing Support.  They’re very helpful, but I think their pricing model is probably a barrier for some architects.  Regardless of complexity, it turned out to be a very good deal.

Azure Hosting if you have a MSDN License:
If you currently have a Visual Studio Premium with MSDN subscription, that may include $100 monetary credit each month. Therefore, as long as your usage is within $100, you would not be billed. If your usage exceeds $100, you would be charged as per standard rates.

Standard Website
You will be metered for compute hours as you are using reserved units for your website.
How many compute hours are there in a month?  930 compute hours.
Pricing = 930 hours x $0.06/hour = $55.80 USD.

WordPress Websites MySQL Database

Since I made a WordPress site, I discovered I got 1 Free 20MB MySQL database license.  Additional MySQL databases will be charged.

How much would I be charged for a second MySQL database?
Standard Rates: http://www.windowsazure.com/en-us/pricing/details/sql-database/

  • 0 to 100 MB         $4.99
  • 100 MB to 1 GB         $9.99
  • 1 GB to 10 GB         $9.99 for the first GB, $3.99 for each additional GB
  • 10 GB to 50 GB         $45.96 for the first 10 GB, $1.99 for each additional GB
  • 50 GB to 150 GB     $125.88 for the first 50 GB, $0.99 for each additional GB

How much would a Microsoft SQL Server Database Cost (what are standard rates)?
When you set up a SQL Database in Azure you can choose:
DATABASE SIZE    DATABASE UNITS (EACH DU = $9.99)

  • 0 to 100 MB            0.5 DU
  • 100 MB to 1 GB        1 DU
  • 1 GB to 10 GB        1 DU for the first GB, 0.4 DU for each additional GB
  • 10 GB to 50 GB        4.6 DU for the first 10 GB, 0.2 DU for each additional GB
  • 50 GB to 150 GB        12.6 DU for the first 50 GB, 0.1 DU for each additional GB

These rates are prorated.

Why does a “Windows Azure Website” show up under “Cloud Services”?
For billing purposes only, Azure Web Sites (standard) uses Windows Azure Compute Hours meter (also used by Cloud Services). This is the reason why the website would show up as “Cloud Services” under the invoice/detailed usage report.

If I have 3 WordPress websites, how can I minimize the costs?
By default, if you create a Standard Website, there is a reserved unit created for that sub-region (ex: east-us).
Any additional websites that you create in the same sub-region, would be automatically linked to this reserved unit.
We will be billing only for the reserved unit and not for the number of websites.
Therefore, to save cost, you can create all three websites under the same sub-region.
You would be billed only for one reserved unit, i.e. 30 compute hours per day.

Well, hope that helps anyone out.


Dev Setup: Windows 7, IIS 7.5, SQL 2012, Visual Studio 2012, TFS

This entry attempts to cover, from start to finish, how one developer might setup a new development machine with Windows 7, IIS 7.5, SQL 2012, VS 2012, and Team Foundation Cloud Services.  I am publishing this because I found many snags along the way with no good answers to be found on the internet in a unified place.  SQL and .Net answers rarely seem to meet well.  Best practices often get chucked due to frustration.

Environment Assumptions:

Windows 7

IIS 7.5

SQL 2012

Visual Studio 2012

Source Control: TFS Cloud Service

 

Install the Operating System first.

Do all the required windows updates.

 

Installing IIS

Control Panel -> Programs -> Programs & Features: Turn Windows Features On or Off

Expand Internet Information Services

Match this image

HRmonise3 Setup - IIS Install

 

Installing URL Rewrite (click the link and install)

(x86 – 32bit)

http://www.microsoft.com/web/gallery/install.aspx?appid=urlrewrite2

(x64 – 64bit)

http://www.microsoft.com/en-us/download/confirmation.aspx?id=7435

Do all the required windows updates.

 

Installing SQL 2012

Do not install Analysis Services or Reporting Services (unless you really work on a system with this…)

Choose Mixed Mode with a “sa” account password

Change the default location where the database/logs are stored (probably to your D drive)

Change the SQL Server Service startup type to “Automatic”

After install, do all the required windows updates.

Restore a backup of SuperAwesomeCompanyWebsite database

 

 

Installing the Development Environment

Install Visual Studio 2012

Install Visual Studio Team Foundation Tools

Install SQL Server Data Tools

Do all the required windows updates.

 

CONFIGURING SuperAwesomeCompanyWebsite: Team Foundation Server (Cloud Service assumed to be setup)

In Visual Studio 2012

Open Team Explorer

Connect to https://SuperAwesomeCompanyWebsite.visualstudio.com/DefaultCollection

Download the project to your computer (suggest a folder like “D:\TFS\SuperAwesomeCompanyWebsite\”)

 

CONFIGURING SuperAwesomeCompanyWebsite: IIS

Quick Explanation of IIS 7.5 and SuperAwesomeCompanyWebsite

Hierarchy in IIS = Website -> Application (one to many) -> Virtual Directory (one to many)

SuperAwesomeCompanyWebsite only has 1 application, and 1 virtual directory

 

Open IIS

Go to Default Website

Edit Bindings (right column): Add HTTP/HTTPS; remove all others (unless you really need that stuff…)

Right Click on Default Website -> Add an Application

Use the built in “DefaultAppPool”

Match the images provided:

HRmonise3 Setup - IIS HRmonise3 Basic Settings

 

HRmonise3 Setup - IIS HRmonise3 Advanced Settings

Click on SuperAwesomeCompanyWebsite -> Authorization Rules

HRmonise3 Setup - IIS HRmonise3 Authorization Rules

If you like, you may setup the Connection Strings here as well (will modify your web.config)

 

Informational:

Click on Application Pool: DefaultAppPool and examine the Advanced Settings

                Notice its “Identity” is ApplicationPoolIdentity (not Network Service, not Local Service, not custom thingy…)

Quick Explanation of the ApplicationPoolIdentity (new in IIS 7.5) and NetworkService (what you used to see a lot)

                                ApplicationPoolIdentity: In IIS 7.5, the default Identity for an Application Pool is ApplicationPoolIdentity. ApplicationPoolIdentity represents a Windows user account called “IIS APPPOOL\<AppPoolName>”, which is created when the Application Pool is created, where AppPoolName is the name of the Application Pool. The “IIS APPPOOL\<AppPoolName>” user is by default a member of the IIS_IUSRS group. So you need to grant write access to the IIS_IUSRS group

CONFIGURING SuperAwesomeCompanyWebsite: FOLDER STRUCTURE

Browse to root folder of your TFS project

Right click on the folder -> properties -> security: Edit: Add

Match the image provided

HRmonise3 Setup - Source Code Folder Permissions

 

CONFIGURING SuperAwesomeCompanyWebsite: SQL SERVER

In SQL Server SSMS

1. Create SQL Login

Open Security folder (on the same level as the Databases, Server Objects, etc. folders…not the security folder within each individual database)

Right click logins and select “New Login”

In the Login name field, type ‘IISSQL_Account’

Choose SQL Server authentication, set your password

Change Default Database to the Generic database

Click User Mapping

Grant db_datareader, db_datawriter on each database you want.

Leave default schema blank

2. Create SQL User on your SuperAwesomeCompanyWebsite database

Expand your database “SuperAwesomeCompanyWebsite_generic”

Expand: Security -> Users

Right Click Users: Add New User

user name: IISSQL_Account

login name: IISSQL_Account (from step 1)

default schema: leave blank

Membership: db_datareader, db_datawriter

3. Grant permissions to the new Login on your SuperAwesomeCompanyWebsite database

Run this SQL:

GRANT SELECT, EXECUTE, UPDATE, INSERT ON SCHEMA :: dbo TO [IISSQL_Account]

 

CONFIGURING SuperAwesomeCompanyWebsite: ASP.Net 4 Framework

Tell ASP.Net 4 Framework to take priority with Default website

Open Command Prompt with Administrative Rights

Browse to: C:\Windows\Microsoft.NET\Framework64\v4.0.30319

Run: aspnet_regiis.exe -i

 

Tell ASP.Net Web Service to start automatically

1) Start–> Administrative Tools –> Services

2) right click over the ASP.NET State Service and click “start”

*Additionally, you could set the service to Automatic so that it will work after a reboot.

 

 

Final notes:

This document contains as many of the gotchas and annoying crap that isn’t well organized on the web as I could find.

Obviously you’ll run into your own peculiar problems and I’d be happy if you shared them with me (and their resolution).
