What a view...

Wednesday, December 30, 2015

Notes of an Aspiring Woodworker

"EngNotes" are my engineering notes.  This is my way of creating a digital version of an engineering notebook.  This particular entry is on Woodworking.  This is a living post, I will be updating it as I learn more.




Rules of the Road

  1. Always sand with the grain
    • Random orbital sanders let you sand in any direction because their motion is random
  2. Wood swells/expands cross/perpendicular to the grain
    • Boards tend to get wider, not longer
  3. In general, hardwoods are from deciduous trees and softwoods are from conifers
    • Hardwoods are not automatically denser than softwoods, although it is common
  4. Oil-based polyurethane tends to yellow the finish; water-based polyurethane doesn't
  5. Use natural brushes for oil-based finishes and synthetic brushes for latex, acrylic or water-based finishes. Rollers and rags can work for either type of finish.
  6. Always sticker lumber that has been finished to the final dimension until you get ready to use it so that air can circulate evenly around all sides of the boards.

Tips and Tricks

  1. Put painters tape on edges to help contain glue squeeze-out
  2. Use packing tape or wax paper on surfaces that may stick to squeeze-out (like clamps)
  3. Cauls are used to keep a surface straight while clamping in the other direction
  4. Use a tack cloth to clean and remove dust
    • Typically cheesecloth soaked in varnish to make it sticky
  5. Janka is the wood hardness scale.

Glues

  1. Traditional wood glue is PVA glue
    • Sets up relatively quickly, 5-10min.
    • Titebond III takes about twice as long to set as Titebond I and II
  2. Polyurethane glue
    • Gorilla glue is a common brand
    • Longer cure time than PVA glue
    • Expands as it dries
    • Requires moisture to cure
      • Can come from moisture in the air
      • To accelerate cure, wet the surfaces to be bonded
  3. Hide glue
    • From animal hides 
    • Water soluble
      • Don't use on things that need to be water resistant 

Finishes

  1. Oils
    • Drying oils cure in the presence of oxygen
      • Boiled Linseed Oil (BLO), Tung Oil
      • Boiled linseed cures much faster than tung because it has additives
      • These oils heat up when curing.  Old rags can spontaneously combust.
    • Non-drying oils do not dry in the presence of oxygen
      • Vegetable (peanut, olive), mineral oil
  2. Varnish is a very general term; it consists of a resin dissolved in a solvent
    • Polyurethane, alkyd, phenolic are common resins
    • Often varnish can be mixed with oils 
  3. Lacquer
    • Hardens through evaporation of solvent, not chemical reaction
    • Can be redissolved by a solvent
  4. Shellac is bug poop
    • Like lacquer, it hardens through evaporation of a solvent
    • Alcohol is a common solvent
    • Since alcohol redissolves shellac, it's not recommended for surfaces like bar tops where drinks may be spilled
    • Can be affected by heat
  5. Lacquer and shellac tend to be harder but more brittle than varnish
  6. Coats of shellac and lacquer "melt into" previous coats because application of a new coat partially dissolves the existing coats
  7. Coats of varnish build distinct layers
  8. Rubbing out varnish is more difficult than lacquer or shellac because you have to make sure you don't rub through the final coat.  If you do and reach a coat below, you can end up with "halos" at the edges of the rub-through.
    • Don't use a thinned or wiping varnish for the final coat of varnish that is to be rubbed out

Lumber Yard Lingo

  1. Nominal sizes round 1/2" measurements up to the next whole number
    • Almost always true; example: a 2x4 actually measures 1.5" x 3.5"
  2. Rough wood is ordered and priced in board feet
  3. Thickness is measured in quarter inches
    • Example: 4/4 is 1", 8/4 is 2"
  4. Milling
    • Edges
      • Specified by the number of edges that are straight
      • Examples: SL1E (straight line 1 edge), SL2E (straight line 2 edges)
    • Faces
      • H/M (Hit and Miss) the board will be rough planed on both faces.  This only scrapes off the rough and high spots.  There will be low spots left.
      • S1S (Surfaced 1 Side) the board will be planed flat on one face
      • S2S (Surfaced 2 Sides) the board will be planed flat on both faces
      • S3S (Surfaced 3 Sides) essentially S2S + SL1E
      • S4S (Surfaced 4 Sides) S2S + SL2E. All 4 sides will be milled and planed true
  5. Grade refers to the expected yield of dimensional lumber from rough stock
    • FAS (First and Seconds), highest grade. min size 6in x 8ft, yield is 83% and up
    • Sel (Select) virtually the same as FAS, but the min board size is smaller (4in x 6ft)
    • 1 Com (#1 Common) often called Cabinet Grade, min board size is 3-in x 4-ft, yield is 66% and up
    • 2 Com (#2 Common) often called Economy Grade
  6. Formula for calculating Board Feet: Length(in) x Width(in) x Thickness(in) / 144 (see the sketch below)
    • When calculating board feet, anything under one inch is rounded up to 1 inch
      • A board that's 3/4 x 5 x 17 would be calculated as (1 x 5 x 17)/144
    • Over an inch is left as-is, 2-5/8 would be left as 2-5/8
Condensed from a Reddit post and its comments.
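
The thickness rounding rule is easy to forget when figuring board feet, so here is a minimal Matlab sketch of the formula above (the boardFeet helper name is just an illustration, not from the original post):

% Board feet calculator
% ---------------------
% Board Feet = Length(in) x Width(in) x Thickness(in) / 144, with any
% thickness under one inch rounded up to 1 inch (thicknesses over an
% inch are left as-is).
boardFeet = @(lenIn, widthIn, thickIn) lenIn .* widthIn .* max(thickIn, 1) ./ 144;

% Example from above: a 3/4 x 5 x 17 board is figured as (1 x 5 x 17)/144
bf = boardFeet(17, 5, 0.75)   % ~0.59 board feet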

Sunday, March 29, 2015

EngNote - J/S Ratio

"EngNotes" are my engineering notes.  This is my way of creating a digital version of an engineering notebook.  This particular entry is on Jam to Signal Ratio.
The Jam to Signal Ratio (J/S) is a useful quantity when evaluating an EW scenario.  Below is a simplified derivation of how to calculate it.

Note that the isotropic antenna effective area is present twice in the jammer's equation and only once in the target's equation.  This is due to the fact that the jammer must "convert" the received power density into power before re-transmitting.


$ \begin{matrix}
P_{target.rx} & = & P_{radar}Gain_{radar.tx}(\frac{1}{4\pi Range^{2}})\sigma _{rcs}(\frac{1}{4\pi Range^{2}})(\frac{\lambda ^{2}}{4\pi})Gain_{radar.rx} \\
P_{jam.rx} & = & P_{radar}Gain_{radar.tx}(\frac{1}{4\pi Range^{2}})(\frac{\lambda ^{2}}{4\pi})Gain_{sys}(\frac{1}{4\pi Range^{2}})(\frac{\lambda ^{2}}{4\pi})Gain_{radar.rx} \\
\frac{Jam}{Signal} & = & \frac{P_{jam.rx}}{P_{target.rx}} \\
 & = & (\frac{\lambda ^{2}}{4\pi})Gain_{sys}(\frac{1}{\sigma _{rcs}}) \\
\end{matrix} $
Where:

$ (\frac{\lambda ^{2}}{4\pi}) = IsotropicAntenna_{EffectiveArea} $
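
To keep the units straight, here is a minimal Matlab sketch that evaluates the final J/S expression above for a made-up scenario (the frequency, system gain, and RCS values are assumptions for illustration only):

% Jam to Signal ratio example
% ---------------------------
freq    = 10e9;            % radar frequency (Hz), assumed
c       = 3e8;             % speed of light (m/s)
lambda  = c / freq;        % wavelength (m)
gainSys = 10^(60/10);      % jammer system gain, 60 dB expressed as a linear value
rcs     = 5;               % target radar cross section (m^2), assumed

% Final line of the derivation above
jamToSignal    = (lambda^2 / (4*pi)) * gainSys * (1/rcs);
jamToSignal_dB = 10*log10(jamToSignal)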


EngNote - Polarization

"EngNotes" are my engineering notes.  This is my way of creating a digital version of an engineering notebook.  This particular entry is on Polarization.
Textbooks and online references often don't give a complete and practical discussion of how to calculate the impact of a wave's polarization on the magnitude and phase of a received signal.  As a result, I made a polarization toolbox for Matlab to serve as a quick reference.  There is a link to the toolbox in the Code section below.


Cross Polarization Discrimination for a Circular Antenna

$ XPD(dB) = 20log_{10}\frac{axRatio+1}{axRatio-1}$
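
As a quick numeric check, here is a minimal stand-alone Matlab sketch of the formula (the 3 dB axial ratio is just an assumed example; this snippet is not part of the toolbox linked below):

% Cross polarization discrimination from axial ratio
% --------------------------------------------------
axRatio_dB = 3;                      % axial ratio in dB, assumed value
axRatio    = 10^(axRatio_dB/20);     % convert to a linear (voltage) ratio
xpd_dB     = 20*log10((axRatio + 1) / (axRatio - 1))   % about 15.3 dB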



Code

GitHub

Good References

[1] Polarization in Electromagnetic Systems, Warren Stutzman, 1992
The best practical resource on polarization I have found



EngNote - Linear Least Squares

"EngNotes" are my engineering notes.  This is my way of creating a digital version of an engineering notebook.  This particular entry is on Linear Least Squares.



I often find myself using linear least squares curve fitting to model a data set.  Below is a quick write-up of linear least squares that includes a derivation AND some sample Matlab code.  After the write-up I've also included more advanced variants of linear least squares curve fitting - weighted, regularized, and constrained linear least squares.  Often I have found these variants to be necessary for the problem at hand.  At the very end there are links to invaluable reference material.


Linear Least Squares Write-Up

The Problem
Curve fitting is the process of formulating an equation that describes how one variable depends on another, given a data set (xi, yi), i = 1, 2, 3, ..., n, where xi is the independent variable and yi is the dependent variable. One quantitative way to assess the quality of an equation's fit to the data set is to compute the sum of the squares of the offsets (or residuals) between the data and the describing equation.

Illustration 1: Example Data Set and Fit Equation

Introduction
The quality of an equation's fit to a data set matters. Linear least squares is a common and straightforward method for finding a function that best fits a data set. Keep in mind that linear least squares assumes the data set being modeled can be represented as a linear combination of basis functions. The basis functions do not need to be linear in nature, but the combination of them must be.

Consider trying to fit a polynomial to a data set. The method of linear least squares will attempt to find the coefficients ( a,b,c,... ) of the polynomial p = a + bx + cx² + … that minimize the sum of the squares of the offsets:

$S=\sum\limits_{i=1}^{n}(a+bx_{i}+cx^{2}_{i}+...-y_{i})^{2}$

1, x, x², … compose the basis functions of the fit. Notice that the basis functions are not confined to be linear functions. The coefficients ( a,b,c,... ) form a weighting for the linear combination of the basis functions.

Derivation
When S, the sum of the squares of the offsets (errors), is viewed as a function of the polynomial's coefficients ( a,b,c,... ), S is minimized when its gradient is 0. Setting each partial derivative to zero gives one gradient equation per coefficient:

$\frac{\partial S}{\partial a}=0, \frac{\partial S}{\partial b}=0, \frac{\partial S}{\partial c}=0, ...$

By combining all of the gradient equations the coefficients ( a,b,c,... ) that minimize S can be solved for. Below is an example of the steps involved in fitting a first-order polynomial (a line) to a data set.

Example
Data points (xi,yi) generated by:

$y_{i} = x_{i} + noise$

First order polynomial to be fit to the data set:

$a + bx_{i}$

Equation for the sum of the squares of the error:

$S=\sum\limits_{i=1}^{n}(a+bx_{i}-y_{i})^{2}$

Solving for the two gradient equations:

$\frac{\partial S}{\partial a}=-2\sum\limits_{i=1}^{n}(y_{i}-(a+bx_{i}))=0$
$na+b\sum\limits_{i=1}^{n}x_{i} = \sum\limits_{i=1}^{n}y_{i}$
$\frac{\partial S}{\partial b}=-2\sum\limits_{i=1}^{n}(y_{i}-(a+bx_{i}))x_{i}=0$
$a\sum\limits_{i=1}^{n}x_{i}+b\sum\limits_{i=1}^{n}x_{i}^{2} = \sum\limits_{i=1}^{n}x_{i}y_{i}$

Converting the two equations into matrix form gives:

$ \begin{bmatrix} n & \sum\limits_{i=1}^{n}x_{i} \\
\sum\limits_{i=1}^{n}x_{i} & \sum\limits_{i=1}^{n}x_{i}^{2} \end{bmatrix}
\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} \sum\limits_{i=1}^{n}y_{i} \\ \sum\limits_{i=1}^{n}x_{i}y_{i} \end{bmatrix} $

Rearranging and solving for (a,b) yields:

$ \begin{bmatrix} a \\ b \end{bmatrix} =
\begin{bmatrix} n & \sum\limits_{i=1}^{n}x_{i} \\ \sum\limits_{i=1}^{n}x_{i} & \sum\limits_{i=1}^{n}x_{i}^{2} \end{bmatrix}^{-1}
\begin{bmatrix} \sum\limits_{i=1}^{n}y_{i} \\ \sum\limits_{i=1}^{n}x_{i}y_{i} \end{bmatrix} $
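
As a quick check of the derivation, here is a minimal Matlab sketch that builds this 2x2 system from the sums and solves it directly (the data is generated per the example above):

% Line fit via the 2x2 normal equations
% -------------------------------------
x = [0:25].';
y = x + randn(size(x));              % y = x plus noise

n   = numel(x);
A   = [ n sum(x) ; sum(x) sum(x.^2) ];
rhs = [ sum(y) ; sum(x.*y) ];

ab = A \ rhs;                        % [a ; b], intercept and slope
fprintf('fit: y = %.3f + %.3f*x\n', ab(1), ab(2));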

The solution to the general linear least squares problem is almost always written in the following matrix form.  Reference [1] contains a detailed derivation.

$ a = (X^{T}X)^{-1}X^{T}y $


Examples

Linear Least Squares

$ a = (X^{T}X)^{-1}X^{T}y $

X contains the basis functions in its columns and the different measurement states in its rows.  The vector 'a' is the coefficient vector being sought.  The vector 'y' is the measurement vector being fit.

% Linear least squares example
% ----------------------------

% Create the independent variable
x = [0:25].';

% Create the true dependent variable (a line of slope 1 and offset 5 in this example)
yTrue = x+5;

% Create observations of the dependent variable
% The rand call is adding noise to the observations
yMeas = x + rand(size(x)) + 5;

% Create the basis functions (model) that will be used to estimate the true 
% system. In this case, the basis functions are a constant function and a
% linear function (matches true system).
basisFunc = [ones(size(x)), x];

% Calculate the linear least squares fit
coefs = (basisFunc' * basisFunc)^-1 * basisFunc' * yMeas;
llsFit = basisFunc*coefs;

% Make a plot illustrating the fit
figure; plot(x,yMeas,'o','LineWidth',2);
hold all; plot(x,llsFit,'LineWidth',2);
grid on; legend('Measured Data','Predicted Fit');
text(1,25, ['Fit Equation:  '  num2str(coefs(2)) 'x + ' num2str(coefs(1))])
text(1,26.25, ['True Equation: y = x + 5']);


Weighted Linear Least Squares

$ a = (X^{T}WX)^{-1}X^{T}Wy $
$ W = diag(w) $

Weighted linear least squares allows one to assign less importance to particular measurement points in the fit.  A common reason to do this is that some of the measurements have different noise statistics than the others.  An exaggerated case is given below for illustration.

Applying weights is a straightforward extension: build a diagonal matrix W from the weight (importance) vector and insert it into the normal equations as shown above.

% Weighted linear least squares
% -----------------------------

% Create a dependent variable
x = [0:99].';
% Define a function to be measured/estimated (5*x)
% Add some noise to it (randn) to simulate a measurement process
y = 5*x + randn(size(x))*5;
basisFunc = [ ones(size(x)) x ];

% Calculate a plain-jane linear-least squares
coefs = (basisFunc'*basisFunc)^-1 * basisFunc' * y;

% Plot the results
figure;
subplot(3,1,1);
plot(x,y,'o');
hold all; plot(x,basisFunc*coefs,'LineWidth',2);
title('Linear Least Squares Fit With No Gross Measurement Error');
ylim([0 525]);

% Pick a couple of measurements and give them a gross error.  I strategically
% picked two points to make it easier to see the impact.
y(100) = 1;
y(1) = 500;
% Calculate the plain-jane linear least squares with the gross error
coefs = (basisFunc'*basisFunc)^-1 * basisFunc' * y;

% Plot the results to illustrate how the gross error "pulls off" the fit
subplot(3,1,2);
plot(x,y,'o');
hold all; plot(x,basisFunc*coefs,'LineWidth',2);
title('Linear Least Squares Fit With A Gross Measurement Error');
ylim([0 525]);

% Create a weight vector - in this case the weight vector is relatively
% arbitrary to prove a point.  In general, determining the weight vector is
% the secret sauce of the weighted linear least squares.
weight = ones(size(y));
weight(100) = 0;
weight(1) = 0;
weight = diag(weight);

% Calculate the weighted linear least squares solution
coefs = (basisFunc'*weight*basisFunc)^-1 * basisFunc' * weight * y;

% Plot the weighted result to illustrate how the gross error no longer
% "pulls off" the fit
subplot(3,1,3);
plot(x,y,'o');
hold all; plot(x,basisFunc*coefs,'LineWidth',2);
title('Weighted Linear Least Squares Fit With A Gross Measurement Error');
ylim([0 525]);


Regularized Linear Least Squares

$ a = (X^{T}X + \mu I)^{-1}X^{T}y $
where $\mu$ is known as the regularization weight and $I$ is the identity matrix

If your basis functions don't model the process being fit, if some of them are nearly dependent, or if the number of basis functions is large, linear least squares can over-fit.  An over-fit solution can look good on paper, but it can do "weird" things when the fit is evaluated between measured points (interpolation).  Said another way, over-fitting increases a fit's sensitivity to input noise.  A common symptom of over-fitting is that the coefficients of the fit begin to grow rapidly.

One way around over-fitting is to use regularized linear least squares.  This technique adds a second objective to the least squares solution - minimizing the sum of the squares of the coefficients themselves.  The trick to this technique is picking the regularization weight.  Too large a weight and the fit favors shrinking the coefficients too much, resulting in a poor fit.  Too small a weight and you wind up over-fitting.  The balancing act between the regularization weight and the quality of the fit is illustrated by an L-curve (see references [5] and [6]).
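
Below is a minimal Matlab sketch of the effect, fitting an intentionally over-sized polynomial basis to noisy sine data (the basis order, noise level, and mu value are arbitrary choices for illustration):

% Regularized (ridge) linear least squares
% ----------------------------------------
x = linspace(0, 1, 20).';
y = sin(2*pi*x) + 0.1*randn(size(x));      % noisy measurements of a sine

% High-order polynomial basis, columns are 1, x, x^2, ..., x^10.  This basis
% is intentionally over-sized and poorly conditioned (Matlab may warn that the
% matrix is close to singular - that is a symptom of the problem).
basisFunc = bsxfun(@power, x, 0:10);

% Plain linear least squares
coefsLLS = (basisFunc'*basisFunc)^-1 * basisFunc' * y;

% Regularized linear least squares, mu is the regularization weight
mu = 1e-3;
coefsReg = (basisFunc'*basisFunc + mu*eye(size(basisFunc,2)))^-1 * basisFunc' * y;

% Rapidly growing coefficients are the tell-tale sign of over-fitting;
% regularization keeps them small.
fprintf('Max |coef|, plain LLS:       %g\n', max(abs(coefsLLS)));
fprintf('Max |coef|, regularized LLS: %g\n', max(abs(coefsReg)));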

Constrained Linear Least Squares

Often there are parts of the fit process I would like to control.  At one point I wanted the coefficients I was finding to sum to a specific value.  This can easily be accomplished by adding a faux value to all of the basis functions (an extra row in the X matrix) and a corresponding faux measurement to the measurement vector (an extra element in the y vector).

Keep in mind, this isn't a hard constraint; it is just part of the fitting process.  As a result, it isn't necessarily met exactly.  You can increase the importance of the constraint relative to the fit by increasing the magnitude of the faux values added.


% Constrained linear least squares
% -----------------------------

% Make up a measurement vector
meas = [-6 ; 0.7 ; 0.0];
% Define a regularization weight (not necessarily needed)
lambda = 0.00005;

basisFunc = [0.5 -3.0 0.0 ; 0.3 -1 0.0 ; 0.8 0.3 0.0 ; 0.71 2.3 0.0].';

for weightNdx = 0:99
  % Add the constraint, making the number larger increases the importance of 
  % the constraint
  meas(4) = weightNdx/10;
  basisFunc(4,:) = weightNdx/10;
  
  % Calculate the regularized linear least squares solution
  coefs = (basisFunc' * basisFunc + lambda .* eye(size(basisFunc,2)))^-1 * ...
    basisFunc' * meas;
  
  % Debug evaluation
  coefSize(weightNdx+1) = sum(coefs);
  
end

% Plot the sum of the coefficients to show it approaching one
figure; plot(linspace(0,9.9,100),coefSize,'LineWidth',2);
title('Sum of the Coefficients of the Fit');
ylabel('Sum of Coefficients');
xlabel('Weight Assigned to Sum');



Good References

[1] The Mathematical Derivation of Least Squares [Local Mirror]
http://isites.harvard.edu/fs/docs/icb.topic515975.files/OLSDerivation.pdf

[2] Least-Squares [Local Mirror]
http://stanford.edu/class/ee103/lectures/least-squares/least-squares_slides.pdf

[3] Regularized Least-Squares and Gauss-Newton Method [Local Mirror]
http://see.stanford.edu/materials/lsoeldsee263/07-ls-reg.pdf

[4] Constrained Linear Least Squares [Local Mirror]
http://people.duke.edu/~hpgavin/cee201/constrained-least-squares.pdf

[5] The L-curve and its use in the numerical treatment of inverse problems [Local Mirror]
https://www.sintef.no/globalassets/project/evitameeting/2005/lcurve.pdf

[6] Choosing the Regularization Parameter [Local Mirror]
http://www2.compute.dtu.dk/~pcha/DIP/chap5.pdf