Signing git commits
October 03, 2022

If you’re not already, I recommend signing your git commits.
Why should you sign your git commits?
If a commit isn’t signed, there’s no guarantee the author name you see is the actual author of the commit. You can forge commits 😳. You might not think this is a big deal, especially if you’re working on closed source, but if you’re working on anything open-source then it’s important.
How to set up on macOS
Install gpg tools
Download and install GPG Suite
Generate a GPG key
Once GPG Suite is installed, generate a new GPG key:
```shell
gpg --full-generate-key
```

Follow the prompts:
- Select RSA and RSA (default)
- Choose a key size of 4096 bits
- Set the key to not expire (or set an expiration date)
- Enter your name and email (use the same email as your git config)
- Set a secure passphrase
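For unattended setups, gpg also accepts a batch “key-spec” file that answers the same prompts (see gpg’s documentation on unattended key generation). A sketch; the name and email values are placeholders:

```shell
# Write a key-spec file answering the prompts above (values are placeholders).
# gpg would consume it with: gpg --batch --generate-key /tmp/keyspec
cat > /tmp/keyspec <<'EOF'
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: Your Name
Name-Email: your.email@example.com
Expire-Date: 0
EOF
cat /tmp/keyspec
```

The interactive flow above is fine for a one-off laptop setup; the batch file is mainly useful for provisioning scripts.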
List your GPG keys
To see your newly created key:
```shell
gpg --list-secret-keys --keyid-format=long
```

You’ll see output like:
```
sec   rsa4096/3AA5C34371567BD2 2022-10-03 [SC]
      1234567890ABCDEF1234567890ABCDEF12345678
uid           [ultimate] Your Name <your.email@example.com>
ssb   rsa4096/4BB6D45482678CE3 2022-10-03 [E]
```

Copy the GPG key ID (the part after rsa4096/, e.g., 3AA5C34371567BD2).
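If you want to grab that ID in a script, a small sed expression works. A sketch; the sample line below is hypothetical output matching the format shown above:

```shell
# A sample "sec" line in the format shown above (hypothetical key ID)
sample='sec   rsa4096/3AA5C34371567BD2 2022-10-03 [SC]'
# Capture the hex ID that follows "rsa4096/"
keyid=$(printf '%s\n' "$sample" | sed -n 's|.*rsa4096/\([0-9A-F]*\).*|\1|p')
echo "$keyid"   # → 3AA5C34371567BD2
```

In real use you would pipe `gpg --list-secret-keys --keyid-format=long` into the same sed expression instead of the sample string.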
Configure git to use your GPG key
```shell
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true
```

Export your GPG public key
To add your GPG key to GitHub, GitLab, or other platforms:
```shell
gpg --armor --export 3AA5C34371567BD2
```

Copy the output (including -----BEGIN PGP PUBLIC KEY BLOCK----- and -----END PGP PUBLIC KEY BLOCK-----). On macOS you can pipe the command straight to the clipboard with `| pbcopy`.
Add your GPG key to GitHub
- Go to GitHub Settings → SSH and GPG keys
- Click “New GPG key”
- Paste your public key
- Click “Add GPG key”
Verify it’s working
Make a commit and verify the signature:
```shell
git commit -m "Test signed commit"
git log --show-signature -1
```

You should see “Good signature” in the output.
Troubleshooting
If you encounter an error like “gpg failed to sign the data”:
```shell
export GPG_TTY=$(tty)
```

Add this to your ~/.zshrc or ~/.bash_profile to make it permanent:

```shell
echo 'export GPG_TTY=$(tty)' >> ~/.zshrc
```

Run commands over SSH!
January 08, 2017

There are times when I need to run a few commands on a server. In the past, I would SSH into the server and start making changes. But this is not ideal, since it’s not repeatable. If you’ve run into this problem, I have an alternative: you can run commands over SSH! The script is executed on your machine, but the commands are run on the remote machine.
Please be aware that there are better tools – Ansible, Terraform, CloudFormation, etc. But those are heavy tools.
In this example, I have some configuration files that need to be copied to the remote system, and then I need to execute a couple commands. I also wanted the script to be checked into version control.
The first half of the script copies files, using scp, from my local machine to the remote machine. The second half (the ssh heredoc) runs commands on the remote machine: the files are moved to their correct locations, and Nginx and HAProxy are restarted.
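The `'bash -s'` trick is easy to try locally: `bash -s` reads its script from stdin, which is exactly what the ssh invocation feeds it via the heredoc. A minimal local sketch:

```shell
# bash -s reads a script from stdin; over ssh, the same stream is
# executed by the remote shell instead of a local one.
greeting=$(bash -s <<'EOF'
msg="hello from a heredoc"
echo "$msg"
EOF
)
echo "$greeting"
```

Quoting the heredoc delimiter (`<<'EOF'`) keeps your local shell from expanding variables before the script is sent; the unquoted form in the script below expands them locally, which is sometimes what you want.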
```shell
#!/bin/bash
# Note: ssh/scp -i expects the private key (a .pub path only works when the
# matching private key is loaded in ssh-agent).
PEM=id_rsa
HOST=ec2-54-234-130-49.compute-1.amazonaws.com

scp -i ~/.ssh/$PEM ./surrogate_pop.conf ubuntu@$HOST:/tmp
scp -i ~/.ssh/$PEM ./haproxy.cfg ubuntu@$HOST:/tmp
scp -i ~/.ssh/$PEM ./traffic_cop.lua ubuntu@$HOST:/tmp
scp -i ~/.ssh/$PEM ./allowed_domains.lua ubuntu@$HOST:/tmp

## These are executed on the remote host
ssh -i ~/.ssh/$PEM ubuntu@$HOST 'bash -s' <<EOF
sudo mv /tmp/traffic_cop.lua /usr/share/nginx/traffic_cop.lua
sudo mv /tmp/allowed_domains.lua /usr/share/nginx/allowed_domains.lua
sudo mv /tmp/surrogate_pop.conf /etc/nginx/sites-enabled/surrogate_pop.conf
sudo service nginx restart
sudo mv /tmp/haproxy.cfg /etc/haproxy/haproxy.cfg
sudo service haproxy restart
EOF
```

CloudFormation, API Gateway and Lambda
August 25, 2016

Recently, I’ve been excited by serverless technology. I began using the Serverless framework for code boilerplate and deployment. After some time using the framework, I began feeling pain. Serverless is an excellent project, but it’s moving very fast. For example, the framework uses CloudFormation for resource dependencies such as DynamoDB, API Gateway, roles and permissions (to name a few). CloudFormation is also moving very fast. Support for API Gateway was added to CloudFormation on April 18th, 2016. As new features are added to CloudFormation, you’ll be stuck waiting for Serverless to reach feature parity. I’ve started using CloudFormation directly and relying on bash scripts for deployment. I’m quite happy with the results.
CloudFormation stack
Once we have a CloudFormation template, the AWS CLI provides us with everything we need. Using the AWS CLI we can create the stack like so.
```shell
aws cloudformation create-stack \
  --stack-name hello-world \
  --template-body file://./cloudformation.json \
  --capabilities CAPABILITY_IAM &&
aws cloudformation wait stack-create-complete \
  --stack-name hello-world
```

The first command fires off an async create request to AWS. The second command tells our shell to wait for stack creation to complete.
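The create-then-wait shape is worth internalizing: the first call returns immediately, and the second blocks by polling until a condition holds. A local sketch of the same pattern, with a background job standing in for AWS:

```shell
flag=$(mktemp -u)               # marker path; the file does not exist yet
( sleep 1; touch "$flag" ) &    # "create-stack": async work kicked off in the background
until [ -e "$flag" ]; do        # "wait stack-create-complete": poll until done
  sleep 0.2
done
status="stack-create-complete"
echo "$status"
rm -f "$flag"
```

The AWS CLI’s `wait` subcommands do this polling for you, including backoff and a failure timeout, which is why chaining them with `&&` works so well in scripts.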
After that’s complete, we’ll have created a few resources in AWS. =) Next, we’ll need a way to deploy.
Deployment
We have a few tasks for a complete deployment. We should separate the Lambda deployment from the API Gateway deployment, but in this case I did not.
- Update Lambda Code - Install any dependencies, zip our code, and upload it.
- Publish a Version - Tags a copy of the latest version.
- Update the alias - Our Lambda is pointed to by an alias. This points the alias at our new version.
- Deploy API Gateway - Any changes we make to API Gateway require a deploy.
The script takes two args. The api-gateway-id and the function-name.
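Scripts like this fail in confusing ways when an argument is missing, so a small guard at the top helps. The guard below is my addition, not part of the original script:

```shell
# Fail fast when the two required arguments are missing.
check_args() {
  if [ "$#" -ne 2 ]; then
    echo "usage: deploy.sh <api-gateway-id> <function-name>" >&2
    return 1
  fi
  echo "deploying function '$2' behind API '$1'"
}

check_args abc123 hello-world
```

Dropping the body of `check_args` (minus the echo) at the top of deploy.sh, with `exit 1` instead of `return 1`, gives you a usage message instead of a half-finished deploy.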
```shell
$ ./deploy.sh abc123 hello-world
```

```shell
#!/bin/bash
apiId=$1
functionName=$2
profile=personal

YELLOW='\033[0;33m'
WHITE='\033[0m' # No Color

function say {
  printf "\n${YELLOW} $@ ${WHITE}\n"
}

function zipLambda {
  say "Zipping files." &&
  rm -rf target &&
  mkdir -p target &&
  cp -r *.js package.json target/ &&
  pushd target &&
  npm install --production &&
  zip -r "${functionName}.zip" . &&
  popd
}

function updateLambdaCode {
  say "Uploading new lambda code." &&
  aws lambda update-function-code --function-name $functionName --zip-file "fileb://target/${functionName}.zip" --profile $profile
}

function publishVersion {
  say "Publishing a new version." &&
  aws lambda publish-version --function-name $functionName --profile $profile
}

function updateAlias {
  # Grab the newest version number. A JMESPath query would be more robust
  # than grep/cut, e.g. --query 'Versions[-1].Version' --output text
  version=$(aws lambda list-versions-by-function --function-name $functionName --profile $profile | grep Version | tail -n 1 | cut -d '"' -f 4) &&
  say "Updating the alias to version ${version}." &&
  aws lambda update-alias --function-name $functionName --function-version $version --name prod --profile $profile
}

function deployApiGateway {
  say "Deploying to Api Gateway." &&
  aws apigateway create-deployment --rest-api-id $apiId --stage-name v1 --profile $profile
}

printf "\n🚀🚀🚀 SHIP IT!!! 🚀🚀🚀 \n\n"

zipLambda &&
updateLambdaCode &&
publishVersion &&
updateAlias &&
deployApiGateway
```

ActiveModel::Model
August 14, 2015

Rails 4 brought us ActiveModel::Model. It provides a lightweight interface that’s similar to an ActiveRecord::Base model.
For example, I can create a Person class like so.
```ruby
class Person
  include ActiveModel::Model

  attr_accessor :name, :age

  validates :name, presence: true

  def save
    ## Do cool stuff here...
  end
end
```

```
Loading development environment (Rails 4.2.1)
irb(main):001:0> p = Person.new age: 21
=> #<Person:0x007fc6367193b0 @age=21>
irb(main):002:0> p.valid?
=> false
irb(main):003:0> p.errors
=> #<ActiveModel::Errors:0x007fc638000a68 @base=#<Person:0x007fc6367193b0 @age=21, @validation_context=nil, @errors=#<ActiveModel::Errors:0x007fc638000a68 ...>>, @messages={:name=>["can't be blank"]}>
irb(main):004:0>
```

This is great for instances where you don’t need a full database-backed Active Record model. I’ve used them for form objects and in controllers where I have complex logic.
You can think of these as higher-level abstractions above your ActiveRecord classes. Also, be conscious of the dependency direction: an ActiveModel model can depend on an ActiveRecord model, but your ActiveRecord models shouldn’t depend on an ActiveModel model.
Here’s a more involved example.
Let’s say I have 2 ActiveRecord classes, Org and User.
```ruby
class Org < ActiveRecord::Base
  validates :name, presence: true
end
```

```ruby
class User < ActiveRecord::Base
  validates :first, :last, presence: true
end
```

Now I’ll create an ActiveModel model (non-database).
Notice the validates_each method: it’s going to check each of the ActiveRecord objects and let them raise any errors up to the Signup class.
```ruby
class Signup
  include ActiveModel::Model

  attr_accessor :first, :last, :name

  validates_each :user, :org do |record, attr, value|
    unless value.valid?
      value.errors.each { |k, v| record.errors.add(k.to_sym, v) }
    end
  end

  ## must return a boolean
  def save
    if valid?
      org.save && user.save
    else
      false
    end
  end

  private

  def org
    @org ||= Org.new(name: name)
  end

  def user
    @user ||= User.new(first: first, last: last)
  end
end
```

Awesome right!!
So why do all this? Well, the single responsibility principle states that every class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. ActiveRecord is responsible for persistence to the database. This keeps our classes narrowly focused and allows us to refactor and add more use cases in the future. I think it’s a win. I find this strategy generally works well for one-directional workflows such as signup or, in a shopping app, cart checkout.
Rails configuration
March 14, 2015

Occasionally I’ll see things like this in a code base.
```ruby
def api_host
  if Rails.env.production?
    "http://prod.fake.api.url"
  else
    "http://stag.fake.api.url"
  end
end
```

I try to avoid writing methods like this. Rails provides a nice way to set environment-specific variables.
http://guides.rubyonrails.org/configuring.html#custom-configuration
config/environments/staging.rb

```ruby
config.api_host = "http://stag.fake.api.url"
```

config/environments/production.rb

```ruby
config.api_host = "http://prod.fake.api.url"
```

So now you can refactor the method to this.
```ruby
def api_host
  Rails.configuration.api_host
end
```