Do you have any thoughts on programs or stacks for learning web development?


A friend asked me this recently and I wanted to capture my response. I’m curious to look back on it in a few years to see what will have changed.


Assuming they want to work on the web: there are a few primary categories of programming you need to know about:

  • Command Line
  • Scripting language (with a backend focus)
  • Database
  • HTML, CSS, and plain ol’ JS
  • Frontend framework
  • Version Control
  • Hosting

I do think it’s important to understand a little bit of everything above. Even if, say, you end up as a frontend engineer you’ll be happy you know about databases. Likewise, backend engineers should understand how the services they’re writing will be used to present information to the end user.

Command Line

If I could go back in time I would take a whole class on the command line. The reality is that you’ll be using it every day, and without understanding the basics a lot of other parts of coding just won’t make sense. Plus, it’s useful to understand what you _can_ do on the command line, since sometimes that’s the best place to do your work.

Oh, and learn just enough Vim to open a file, make a change, and save it.
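To make that concrete, here’s a small sampler of the day-to-day commands worth getting comfortable with (the file and folder names are just placeholders):

```shell
mkdir demo               # make a new folder
cd demo                  # move into it
echo "hello" > notes.txt # write some text into a file
cat notes.txt            # print the file's contents
ls                       # list what's in the current folder
grep hello notes.txt     # search inside files
cd ..                    # go back up a level
rm -r demo               # delete the folder and everything in it
```

And the survival-kit Vim workflow: `vim notes.txt` to open, `i` to start typing, Esc when you’re done, `:wq` to save and quit.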

Backend Scripting

Some sort of backend scripting language. JavaScript could be an answer! Python is another strong choice. It used to be PHP, and depending on your company it still might be. What you use here will probably change at some point in your career, so don’t get too attached. The core idea is that you can use this language to do “backend things”: changing data in the database, processing files, and so on.
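As a rough sketch of what “backend things” can look like, here’s a few lines of Python (the data and the filename are made up for illustration):

```python
import json

# Some records, as they might come out of a database query.
users = [
    {"name": "Ada", "visits": 3},
    {"name": "Grace", "visits": 7},
]

# "Backend things": transform the data...
top = max(users, key=lambda u: u["visits"])

# ...and process files, e.g. write a report for some other system to read.
with open("report.json", "w") as f:
    json.dump({"top_visitor": top["name"]}, f)
```

Every backend language has its own spelling of this loop: pull data in, transform it, push it somewhere else.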

Databases

A relational database of some sort: MySQL or PostgreSQL. If you get into this it goes deep, but to start you just need to understand how to create databases and tables and how to work with the data in them.
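As a rough sketch of what “making tables and data” looks like, here’s a few lines of SQL (the table and column names are invented for illustration; SERIAL is PostgreSQL’s spelling, MySQL would use an AUTO_INCREMENT column instead):

```sql
-- Create a table, add a row, read it back.
CREATE TABLE users (
    id    SERIAL PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT
);

INSERT INTO users (name, email)
VALUES ('Ada', 'ada@example.com');

SELECT name, email FROM users WHERE name = 'Ada';
```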

Front End

HTML, CSS, JavaScript

You should have some foundation of how to put a page together using HTML, CSS, and JavaScript. All frontend frameworks are just fancy ways of using these basic tools. My soapbox rant is that we’ve lost sight of that in the web world. You need essentially nothing to get text on a page at a domain name!

Frontend Framework

In 2023 I would recommend React. In 2024 it will probably be something else. The concepts should, mostly, transfer easily to whatever the next thing is. Get used to pining for some feature of some other framework. Get used to conversations about frameworks that start with “I’ve heard good things about…”

Version Control

Git. You don’t need to start with version control, but you should incorporate it into your workflow as soon as you can. If you work with other people you HAVE to use version control, otherwise you’ll end up losing work or reinventing a worse version of Git in Dropbox (ask me how I know!).
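The starter loop, in case it helps to see it (the folder and file names are placeholders):

```shell
mkdir mysite && cd mysite
git init                        # start tracking this folder
echo "<h1>Hi</h1>" > index.html
git add index.html              # stage the change
git commit -m "First commit"    # record it
git log --oneline               # see your history
```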

Hosting

You can make all of this work on your local computer, but the magic is putting it somewhere else and seeing it work elsewhere. There’s a deep rabbit hole here. Pick a cheap host and figure out how to get it live. You do not need to dive into AWS/Azure/Cloud infrastructure right now. You will need to later on!

What about Bootcamps?

If they want to go to a bootcamp: the answer to “Which bootcamp is best?” is constantly shifting. I would ask very recent graduates (as in: graduated in the past year) whether they would recommend it. I certainly wouldn’t go to a bootcamp that doesn’t have a money-back guarantee or some sort of job placement scheme. Your goal with a bootcamp is to get enough knowledge to land your first job. Everything you do in that bootcamp should be oriented around that goal.

Conclusion

I’ve been making websites for years and years and there’s still so much left to learn and everything is changing constantly. Getting comfortable with learning (or just being confused) is a key part of the job. The reward is that when you get things working the feeling is amazing. There’s just a lot of times where it’s not and that’s okay!

Happy Holidays!

In lieu of physical cards this year I spent way too long putting together a digital version that has a few jokes that made us laugh:

2020 Salzman Holiday Card

I really do want you to have a happy last few weeks of the year. If you celebrate Christmas, Merry Christmas! And here’s to a happy new year 🙂

Nerd stuff

There’s a GitHub repo if you really want to delve into it. There’s not a lot there that you can’t envision from what you’re seeing on the site. It’s not meant to be open sourced, and it’s not meant to be more than what it is. Robin Sloan’s idea of apps as home-cooked meals applies here. I honestly wouldn’t have put it on GitHub normally; GitHub Pages required it, though.

The highlights are:

  • React. I don’t really need React for this. It’s overkill; however, it’s what I use at work, so I reached for it. In the future I’ll probably use Preact.
  • Tailwind for styling. Again, we’re using this at work and I’m a huge fan. Once you get used to the syntax it’s fast to use and powerful.
  • GitHub Pages for serving the files. This is the first time I’ve used it, and I was surprised at how easy it was to get going.
  • The “Too Cool” shades are just a PNG the same size as the JPG family photo. It’s such a simple trick. When I was looking around for ways to approach this I ran across this method and was amused at how simple an idea it is and how well it works.
  • Depending on your combination of buttons the fonts change and that delights me to no end. No one else will notice this. Sometimes you do things for you though.

Salzbaby.live Updater

Years ago when my daughter was born I put together a website/app thingy to help us keep people updated on the progress of her coming into the world. The idea was that it was a single page site that would display a single message at any one time. If you were curious how it was going, you could check the site and not bother us.

I wrote a quick iOS app to help me update it from the hospital. Lots of people liked it and followed along for the entire 40 hours or so it took for my daughter to enter the world. It cut way down on the number of “what’s happening?!” texts we got, although we still got a bunch wondering if the site was broken since it was taking so long.

I consider this an example of an app that is, in Robin Sloan’s words, a “home-cooked meal”: an app that doesn’t need to be spun off into a SaaS, be open-sourced, or grow an audience. It’s just for the people it is for.

We knew for our son we wanted to do this again. And, because nothing can be easy, it turned out I needed to rewrite the app. Who knew that Swift would change in the intervening years?!

Components

iOS App

After being frustrated that the code I’d hacked together 5 years ago didn’t magically work with no bugs, I ended up starting over from scratch using SwiftUI. Swift is better than it was 5 years ago, but it’s still a frustrating language to jump into to “get something done real quick”. The documentation is bad.

Anyway, here’s the code for the view:

@State private var message: String = ""

var body: some View {
    VStack {
        Form {
            Text("Salzbaby.live Updater")
            TextField("Enter Update", text: $message)
                .frame(height: 100.0)
                .textFieldStyle(RoundedBorderTextFieldStyle())
            Button(action: {
                self.POSTfunction(message: self.message)
            }) {
                Text("Submit")
                    .multilineTextAlignment(.center)
            }
        }
    }
}

And that ends up looking like this in the app:

Type in the box, tap “Submit”, and it fires off a POST request to a file on the server. The passcode is a hardcoded string that I have replaced below so you don’t “hack” me. The server checks for it on its end.

Here’s the main function for the post request:

func POSTfunction(message: String) {
    // Create a dict and then convert it to JSON
    var dict = Dictionary<String, String>()
    dict["passcode"] = "lolno"
    dict["message"] = message
    let data = try! JSONSerialization.data(withJSONObject: dict, options: [])

    // https://stackoverflow.com/questions/32201926/post-json-request-in-swift
    HTTPPostJSON(url: "https://salzbaby.live/SECRETPHPFILE.php", data: data) { (err, result) in
        if err != nil {
            print(err!.localizedDescription)
            return
        }
        print(result ?? "")
    }
}

The app is then sideloaded onto the devices I physically plug into my computer. This does not scale, but it does not need to.

Server

Aside from general NGINX and SSL setup, on the server I have a PHP file with one job: processing the POST request:

<?php
	# Get JSON as a string
	$json_str = file_get_contents('php://input');

	# Get as an object
	$json_obj = json_decode($json_str);

	if($json_obj->passcode == "lolno") {
		$fn = "SECRETTEXTFILE.txt"; 
		$file = fopen($fn, "w+"); 
		$size = filesize($fn); 

		fwrite($file, $json_obj->message); 

		fclose($file); 
	}
?>

PHP’s motto should be: “your server is already running it so why not abuse it?”

Shouldn’t this be saving to a database instead of a txt file? Yes. Absolutely.

Web Site

The site is a bespoke templating library that does server side rendering and delivers an HTML file to your browser that displays the text in the txt file in the middle of your screen:

<!DOCTYPE html>
<html>
	<head>
		<title>Is the Salz Baby Here Yet?</title>
		<style>
			body {
				height:100%;
			}
			#answer {
				text-align:center;
				font-size:4em;
				position: fixed;
				top: 50%;
				left: 50%;
				transform: translate(-50%, -50%);
				-webkit-transform: translate(-50%, -50%);
			}

		</style>
	</head>
	<body>
		<!-- 
			Hi there, nerds! Here's what you're after:

			The text of the page update is being read from a plaintext file. I have a bespoke iOS app on my phone and am sending a POST request to the server to overwrite the file whenever I make a request. It's simple, it works, and, there's no history by design.

			I'll show you the code sometime! Just not now!
		//-->
		<script type="text/javascript">
		</script>


		<div id="answer">
			<?php
				$fn = "answer.txt"; 
				$file = fopen($fn, "r"); 

				$contents = fread($file, filesize($fn)); 

				fclose($file); 

				print $contents;
			?>
		</div>

	</body>
</html>

This file was more or less the same as it was before just with a more direct comment to the many nerds in my social network. This is me making good on the promise to show you how it all worked.

Conclusion

Programmers overcomplicate everything all the time. The hardest part of this for me was limiting the feature set to only the basics. Since this is never going to be used by anyone else I could afford to cut every corner there is. Heck, there’s no notification on whether or not the update went through. You go to the site to see it there. That’s extremely poor UX!

However, despite the length of the list of feature requests…it did its job admirably. Folks from all of our disparate social circles got to check in on us when they were thinking of us and get a glimpse into what was going on. And as people woke up on the 27th and saw the update, congratulations started to trickle in from all over. The Workantile slack even started a thread to notify people whenever there was an update, which was heartwarming to watch.

I’m glad we did this and the site will show this very important message until the registration on the domain name lapses in about a year:

UPDATE

A friend emailed me after the birth to say that he’d asked the Internet Archive to archive the site when it updated. He said “When he’s older and you explain to him how you set up a website, tell him another nerd archived it.”

This was such a gift and I’m so thankful for my friend who did this:


http://web.archive.org/web/20200526145350/https://salzbaby.live/

http://web.archive.org/web/20200526182423/https://salzbaby.live/

http://web.archive.org/web/20200526193713/https://salzbaby.live/

http://web.archive.org/web/20200527033742/https://salzbaby.live/

http://web.archive.org/web/20200527120404/https://salzbaby.live/

Referencing directories in WordPress themes

I needed to enqueue a handful of scripts and stylesheets in a WordPress theme and ran into much confusion over which functions return which paths from the parent theme vs. the child theme. To save myself trouble later, here is the breakdown of when to use what. The basic gist: “stylesheet” functions get the child theme; “template” functions get the parent.

If you don’t have a child theme any of these will get the current theme.

Absolute Path

These functions get the absolute path on the server’s file system. Mostly useful for referencing other PHP files within your theme’s PHP files:

get_stylesheet_directory() – absolute path to the current theme. If you’re using a child theme it’ll get it. Documentation.

get_template_directory() – absolute path to the parent theme. Even if you’re using a child theme it’ll get the parent. Documentation.

URI

These functions get the public URI for the theme directory. Useful when you’re trying to publicly display something on the site through your theme files:

get_stylesheet_directory_uri() – returns a properly formatted URI for the current theme. Use if you have a child theme that you want to return. Documentation.

get_template_directory_uri() – returns a properly formatted URI for the parent theme. Even if you’re using a child theme it’ll get the parent theme. Documentation.
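Put together, a typical use is enqueueing assets from a child theme’s functions.php. This is a sketch with made-up handle and file names, not code from my theme:

```php
<?php
// In the child theme's functions.php.
function my_theme_assets() {
	// style.css from the child theme: "stylesheet" = child.
	wp_enqueue_style( 'child-style', get_stylesheet_directory_uri() . '/style.css' );

	// A script shipped with the parent theme: "template" = parent.
	wp_enqueue_script( 'parent-main', get_template_directory_uri() . '/js/main.js' );
}
add_action( 'wp_enqueue_scripts', 'my_theme_assets' );
```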

From Hugo to WordPress

There are a million and one blog posts for “move from WordPress to Hugo!” out there. This is the opposite of that.

Hugo is good

I like Hugo a lot. If you need a static site generator I’d recommend it without reservation! Once I had it going it worked very well. I loved how fast Hugo could build the entire site, and it required almost no server resources. Turns out, you don’t need much raw computing power to serve static files to a handful of folks every day.

Posting though

But…posting was far more complex than I wanted it to be. In the end I was syncing a folder via Dropbox, committing my code to a repo, and pushing it to a repo on the server. From there a git hook would run to build the site and then move the files to a folder. Extremely cool and extremely obtuse. If I had just one computer it wouldn’t have been that big of a deal to continue; however, because I’m a glutton for complication, I bounce between 5 different devices (3 Macs, 2 iOS devices) within the course of a week.

Having a web interface to log into matches my mental model of “I am writing and publishing” much better. I also dearly appreciate having drag and drop image support as well as WYSIWYG formatting options. Hugo’s support for non-text media is lacking. That’s totally okay! But, I was finding that I was avoiding publishing certain kinds of posts because I was lacking easy tools to do so.

WordPress and Gutenberg

What finally pushed me to make this change was a recent experience building a new website for Workantile. During that redesign we relied on the Gutenberg editor for formatting our pages. I was impressed! You can create fairly complex pages quickly with blocks in a way that just wasn’t easily doable a few years ago without diving deep into theme files. We’re in a bit of a rough transition point going from the classic editor to Gutenberg (and more importantly the concept of Blocks). It’s getting better all the time though!

Problems

The move went pretty well for the most part. Some technical issues I ran into:

RSS Importer

The RSS importer that is built into WordPress needs some serious attention. The biggest issue is that it fails because it calls a deprecated PHP function. The fix for me was to comment out the offending line. This Stack Overflow post was helpful:

php – Call to undefined function set_magic_quotes_runtime() – Stack Overflow

I just found line 72 in the importer and commented it out. I wouldn’t rely on this edit for an import where you didn’t control all of the content, or for an importer you run more than once. It’s a brittle fix, to say the least. Here’s the error and stack trace:

An error of type E_ERROR was caused in line 72 of the file /var/www/beta.chrissalzman.com/wp-content/plugins/rss-importer/rss-importer.php. Error message: Uncaught Error: Call to undefined function set_magic_quotes_runtime() in /var/www/beta.chrissalzman.com/wp-content/plugins/rss-importer/rss-importer.php:72
Stack trace:
#0 /var/www/beta.chrissalzman.com/wp-content/plugins/rss-importer/rss-importer.php(178): RSS_Import->get_posts()
#1 /var/www/beta.chrissalzman.com/wp-content/plugins/rss-importer/rss-importer.php(204): RSS_Import->import()
#2 /var/www/beta.chrissalzman.com/wp-admin/admin.php(344): RSS_Import->dispatch()
#3 {main}
  thrown

Hugo’s RSS Implementation

And on the Hugo end of things: Hugo’s built-in RSS template doesn’t include the full text of posts. It also doesn’t include categories. I happen to be the sort of person who likes adding categories, so this was a problem! Here’s a gist of my RSS template file that has the full text and categories:

Hugo RSS Implementation with Categories · GitHub

This looks for the categories set in the frontmatter of your markdown files. If there are none it skips it and nothing is added to the feed for that post.

RSS Redirect

I wanted to make sure anyone who subscribed to my old RSS feed continues to get the new one. To do so I set up a 301 redirect using this plugin:

Simple 301 Redirect

Keeping it around since it’s always nice to have this in a readily accessible place.

Images

Moving images over to the new site meant making sure they were uploaded appropriately to WordPress’ Media Library. I toyed with the idea of doing one big import and then going through and fixing individual images. Then I found this plugin that just did it for me. I installed it, edited all of my posts, and resaved them. It ran through, copied everything over, and updated the src attributes for me:

Auto Upload Images – WordPress plugin | WordPress.org

Inserting images in Hugo was one of the reasons I wanted to get away from it, so it was nice to have this just work. After I ran it, I turned the plugin off.

Conclusion

So far, so good. I’m happy with this and this post was fully written and edited in the WordPress editor. It’s nice to have “my website” compartmentalized to something I go to on the internet to edit, although I will miss how much posting a blog post made me look like I was attempting to hack the planet.

Some Thoughts on Tools

Here is what I’ve been learning about tools:

  • Become enamored with taking care of your tools not buying new ones.
  • Buy the cheapest tool you need for a job. If it breaks or fails its intended purpose then buy a more expensive replacement.
  • Completing a project from start to finish is the only way to see what tools you actually need. Planning is fraught with false assumptions.
  • Youtube tutorials are a useful fiction. Watch them for techniques and explanation yet understand that the moments they don’t show are where all the laborious and detailed work is happening.
  • Avoid forums where people argue about specifications and not real world results.
  • If you are scared of the next step, practice it at a smaller scale. If you are still wary, talk it through with a friend.
  • Modify your tools to suit your purposes.
  • Be generous with your tools. Especially those that spend most of their lives sitting on a shelf.

How I Spent My Friday Night, or Why Framerate is the Wrong Choice for Managing Time-based Animations, Actions, and Effects

Of the many bugs I have written, the one that keeps biting me is improperly locking an animation, effect, or action to the framerate of the game rather than to elapsed time on a clock. You can never depend on your game running at a specific, consistent framerate across every device it’ll be played on. This seems obvious, but it’s a very easy assumption to make when you’re in the thick of it. If a device slows down or speeds up, suddenly that tuned animation that was supposed to elapse over 60 frames might take 2 seconds to complete.

For a background animation this might be fine. But what if it’s a function the user is relying on for visual feedback, or to decide their next action?

That’s bad.

The Dark Souls of This Bug

There was a bug in Dark Souls II: Scholar of the First Sin in which character equipment was breaking very fast on the PS4. Equipment in that game has a durability rating that goes down as you use it. If you go too long without repairing it, it will break.

Turns out, they’d linked the decrementing of durability to the framerate. On the PS4 the game ran at a nice steady 60 frames per second, so equipment broke far faster than intended.

Thankfully we have this lovely bit of honesty from the patch notes:

“Fixed issue whereby weapon durability was decreased drastically when used on enemy corpses, friendly characters, etc. (Especially apparent for users running the game at 60 fps as the durability decrease rate was linked to the frame rate).”

Source: IGN

Scope Creep Studios is Not a AAA Studio, but We Have Similar Problems

A common way that I’ve introduced these problems is by doing something like the following:

public IEnumerator moveItOverSome() {
    while (gameObject.transform.localScale.x >= 0f) {
        gameObject.transform.localScale -= new Vector3(.1f, 0f, 0f);
        yield return null;
    }
}

Every frame the coroutine runs, you shrink the scale of the object by a tenth of a unit until it’s below 0, which functionally means it’s invisible. Please ignore the other four or five problems with that coroutine. I’m going for simple readability right now.

The issue with this is twofold:

  • If your framerate dips or spikes or whatever the movement isn’t going to work as expected
  • Coroutines sometimes don’t behave exactly how you want them to

Give me a specific example, please

We were running into an issue on our upcoming app, Night Lights: A Toddler Toy, where every once in a while the coroutine to shrink our object refused to add the Vector3 the way we wanted. Sometimes it would run seemingly forever, slowly decrementing the size of the object by .0000001f every frame. Sometimes the object would shrink part of the way, get caught somewhere, and end up about half the size it should have been, or looking like a speck. Generally, it just wasn’t what we wanted.

After a lot of hours debugging—_a lot of hours_—we finally stopped and did what we should have done from the start: wrote the coroutine to shrink from its full size to 0 over the course of 1 second using a lerping function.

Lerping interpolates between two values based on a third value between 0 and 1. Roughly: how far along in this process am I? At the start: 0; at the end: 1. For single numbers that’s lerp(a, b, t) = a + (b - a) * t. Unity includes a Lerp method that does this per component and returns a Vector3.

For the coroutine, then, we need to note the values we want to go between and the start time. Then, every frame the coroutine runs, we can determine how far along we are in the shrink.

To wit:

public IEnumerator shrink() {
    Vector3 startSize = new Vector3(2f, 2f, 1f);
    Vector3 endSize = new Vector3(0,0,1f);

    //When does all this start?
    float movementStart = Time.time;

    //While current time is less than when we started
    //Plus how long we want it to go for (1 second)
    while (Time.time <= movementStart + 1.0f) {

        //Calculate our new size using the current time (Time.time)
        Vector3 updatedSize = Vector3.Lerp (startSize, endSize, (Time.time - movementStart));

        //Update the gameObject’s size using the new Vector3
        gameObject.transform.localScale = updatedSize;
        yield return null;
    }
}

To the toddler, both approaches—when they work—look similar. Our latter example is reliable, though, and therefore the better solution for our needs.

A Note About Toddlers’ Tastes in Bugs

We found in testing that toddlers often like it when the app breaks in strange ways, but they also tend to then want it to rebreak in those exact same ways again, which is hard to do.

If you’ve spent any time with toddlers that will not surprise you in the least.

What Did You Learn?

Tozier reminded me what’s really important in life when I subtweeted the very short version of this blog post:

“Imagine how much you’re _learning_ though!”

Here’s what we learned:

When it comes to determining how something happens over time in your app or game, you need to tie that action to some sort of timer. That timer could be the framerate, system time, game time, a clock in your basement that you have a webcam pointed at, or something else. Unless there’s a strong reason to do otherwise, reference a clock that isn’t tied to system performance.

Jekyll to Hugo Along with an Updated Deployment Script

I moved my blog from Jekyll to Hugo because I can’t leave well enough alone.

Jekyll is wonderful, but it’s slower than it should be at building a site. Not “go get some coffee” slow, but building felt sluggish. I also needed to reconfigure my workflow for publishing anyway. My earlier workflow involved running Jekyll client-side, keeping the resulting _site folder in version control, and deploying that.

Trust me, it made a lot of sense when I was setting it up. However, it quickly started to show its fragility.

My primary goal was to have the server generate the site itself, so I needed to start from scratch anyway. This seemed like a good opportunity to try Hugo. I also wasn’t running Ruby on my webserver yet and, frankly, installing Hugo was a lot easier than a full Ruby setup:

sudo apt install hugo

And that was about it for setup on the server.

Hugo and Jekyll both use markdown files to generate the site and their frontmatters are similar enough that the transition to Hugo was incredibly simple.

Fast forward through a lot of futzing with config files, themes, and testing builds.

I eventually landed on this: I keep the project in version control and .gitignore the exported site from the repo. The site can still be built locally if needed with hugo serve, but when the project is committed the exported site is ignored. After the commit is pushed to the server it runs the following post-receive hook:

#!/bin/sh
#Checkout the repo to a temporary build folder
git --work-tree=/var/repos/hugoblog/tmpbuild/ --git-dir=/var/repos/hugoblog/ checkout -f
#Build the site
hugo -s /var/repos/hugoblog/tmpbuild/
echo "\n\nBlog has been built. Moving it to chrissalzman.com/blog\n\n"
#Then move the public folder to the right place
rsync -r /var/repos/hugoblog/tmpbuild/public/ /var/www/chrissalzman.com/blog

Nothing terribly fancy, and this will likely get modified in the future. It checks out the project to a folder, runs Hugo, tells me it worked, then moves everything over with rsync.

Note

I did run into a theme issue with git that stumped me for a bit. The theme had been cloned in using git, and since it had its own git repo it wasn’t getting tracked by the project’s repo. After pushing to the server, Hugo was generating a blank site because the theme was empty.

My fix was to copy/paste the theme into a differently named folder, update config.toml to point at that folder, and then add another commit. After that it worked. I’m sure this is the sort of thing that could be solved by leveraging git (probably submodules) in some way I don’t quite understand, but this was easier for me. I also anticipate designing my own theme.

Simple Deployment with Git

The gist of this is that we’re taking a local folder of a project, pushing it to a bare repo on the server, then the server runs a post-receive hook to check out the latest commit to a live folder on a server.

What follows are my notes for how to make this happen using the example of how my blog is deployed:

To start with, create the bare git repo on the server. I’d suggest keeping it out of the folder you deliver your sites from, for security reasons. You do you, though. It just needs to be accessible as a remote. Mine is in /var/repos/blog:

git init --bare

Once it’s created, navigate to the /hooks folder of the repo and add a file named post-receive. This hook runs after a push has been received. You can read all about git hooks here. They’re quite useful.

The post-receive hook

This is a potentially destructive act in that it resets the working directory to your latest commit. If you ever modify a file on the server, the next push will blow it away. That’s okay, though, because you’d never modify a file on the server outside of version control, right?

Also note that if you create a file inside of the folder on the server it will still exist later on. Git isn’t going to delete it unless it’s removed as part of the commit.

Don’t muck around in that folder is what I’m saying.

Here’s that post-receive script:

#!/bin/sh
git --work-tree=/var/www/chrissalzman.com/blog/ --git-dir=/var/repos/blog/ checkout -f
echo "Your blog has been updated at chrissalzman.com/blog/"

The meat of this is running a git checkout command that explicitly sets a “work tree” and a “git dir”. Generally you don’t need to set these for a checkout, because you’re in the folder associated with the repo you’re working with. This is amazing flexibility, albeit sort of confusing until you need it. Stack Overflow is littered with questions about why these flags even exist. It’s also important to note that you should create the “work-tree” folder before you leave the server. This script will not create it for you. Making the script do so is left as an exercise for the reader.

The -f (--force) flag does the following:

When switching branches, proceed even if the index or the working tree differs from HEAD. This is used to throw away local changes. When checking out paths from the index, do not fail upon unmerged entries; instead, unmerged entries are ignored.

The echoed line gets sent back to the terminal you’re pushing from. I’m using it to remind myself of what this all does, under the assumption that I’ll forget later. Remember: your future self thinks you are dumb. Explain your work.

You’ll likely also need to modify the permissions on the file so it can execute. For me this was sufficient:

chmod 775 post-receive

Then back on your local machine initialize your git repo in the folder you want to use:

git init 

And we’ll add in a remote located at the bare repo. For me it was this:

git remote add live ssh://user@domain.com/var/repos/blog/

This use case is making the project “live”, so I’m calling the remote “live”. You may call it what you’d like. Go ahead and commit locally and then push it to live:

git push live master

If all goes well, you push master to “live”, the post-receive hook runs, and it checks the commit out to the live directory. And you’re deployed!

The last step is going to the site to see that, yes, the blog post really did show up.