Typora Git Integration



As much as I love all the different diagram formats and tools, I think there should be a certain caution not to over-integrate them. I believe the philosophy of Typora is to provide a very extensive format for writing documents while keeping the readability of Markdown: if you opened the .md file in any text editor, everything should still be readable. Typora gives you a seamless experience as both a reader and a writer. It removes the preview window, the mode switcher, the syntax symbols of Markdown source code, and all other unnecessary distractions. Instead, it provides a real live preview to help you concentrate on the content itself.


I never had a unified place to store notes. I usually just remembered important things and wrote the rest down wherever I liked. But after a year in college I could not keep remembering everything. I should make it clear early on that I do not take extensive notes; I only jot down things that I find important, like thoughts, quotes, project ideas, some literature, etc. I had always looked for a simple note taking solution, and my ideal solution required the following:

  • Easy and quick to store notes (the longer it takes to note something down, the less motivated I am to do it)

  • Should be backed up (it's important! You don't want to lose that groundbreaking idea to a device failure)

  • Service-independent/personal (I don't trust services, and I would be limited by what a service has to offer)

  • Flexibility (I should be able to store everything and anything and organize it to my liking)

  • Supports multiple devices (Using a single device for note taking is just plain stupid)

Backstory

This is a backstory covering all the apps I used before turning to GitHub. Feel free to skip this section.

So, I had tried to find an ideal service with the above features for a long, long time. I used Evernote a long time ago; in fact, it might have been my first note taking app. But that was before I switched to Linux. On Linux the scenario becomes a lot tougher: the available services always seem to lack something or other. Either they do not sync, are limited to Linux only, or are proprietary. A service that I really liked was Simplenote. It was a very minimal note taking app from the creators of WordPress and had seamless sync between devices. I actually used it to store notes for some time, rather than just testing it, hating it, and moving on.

But like everything in this world, it was imperfect. It was very limited in the types of notes it could store, and the management of notes was very primitive. Also, Simplenote didn't use great encryption. At that time I didn't desperately need a note taking service, so I slowly stopped using it. Simplenote's desktop client was based on Electron.

Later on, with the improvements in Electron, several note taking apps emerged in the wild. I really loved the ideas behind them: Markdown support, minimal design, live rendering for a WYSIWYG-like feel. But the catch was that there was no organization or sync; you just edited a Markdown file and saved it to your device. I then started using my text editor (Atom at the time) as my note editor, because one Electron app was already too much for an old PC. I saved the notes in the Documents folder. No sync, no search, nothing. (I did not know how to search from the Linux command line because I used the GUI most of the time.)

There were also a few note apps designed specifically for programmers: Laverna (never got it running), Abricotine (really liked this one, plus it was open source, but the developers responded slowly to issues), Boostnote (a very full-featured note taking application that was simply overwhelming to my tiny brain), and TagSpaces (what the hell was this? Tags for all your files, like seriously?). All these applications felt like using a chainsaw to open a can of coke. They were too ambitious but lacked the basics.

On the Android landscape, things weren't any better. The thing with cross-platform applications is that most often you end up with an awesome application on one platform and a barely usable one on the others. Very few applications are actually good on all the platforms they support. Back then I used to download and install gazillions of applications, and I can say that I have tried my fair share of note applications for Android. I didn't use Evernote on Android because I started using Android long after I had left Evernote on the desktop.

I did encounter a few great applications for Android, though. The first one was Monospace, a minimal Markdown note editor. It did a few things beautifully: it was unobtrusive and used tags for organization. However, it did not have any sync, and since I used mobile as a secondary device, that was a deal breaker. I also used apps similar to Monospace, such as JotterPad and Writer+, but they were worse than Monospace and I dropped them within a few hours of usage.

If any note editing app stood as a great example of design, it would be Material Notes. It was such a beauty to use. The notes were colored, and the monospace font was just icing on the cake. But this app didn't do anything else. I used it only to store word meanings while I read the works of Robin Hobb. It was a beautiful app, but not a great option for general note taking.

An app that was almost perfect was Notebook by Zoho. It was a great experience to use: notes were arranged into notebooks, several kinds of media attachments could be made, and the editing, though not Markdown, was quite good. It offered sync to Zoho's servers and also had a desktop web version. However, it just felt a little off. The app seemed like a pet project that lacked polish; the web version was just OK, and the web clipper was terrible.

I also used iA Writer. Its greatest plus was the ability to sync to Google Drive. But Google Drive does not have a sync app for Linux, and the way the web version of Drive displays Markdown as a bunch of unprocessed text just irritates the hell out of me. I do like iA Writer's swipe navigation, minimal look, night mode with absolute blacks, and export options. But a weird bug when switching to a different application would lose the note you had just typed for the last half hour, which made me feel helpless. The same happened when you switched device orientations. With such a critical flaw, I simply decided to move on.

So I moved on again, and I have finally set up a workflow for storing notes that I can see myself using in the near future. Although, like always, I will probably switch to the next hot thing :wink:.

The setup

I use Typora to write notes on my laptop. It is a brilliant Markdown editor and one of the primary reasons I prefer the current setup. Typora is built on Electron, but since I no longer use an ancient PC, I do not need to worry about Electron apps taking too much RAM. Typora is currently in beta and is available for free. Similar editors exist and you can use those as well, but having tried a few, I find Typora very polished and reliable. I even use it to write my blog posts; in fact, this very blog post was written in Typora.

It gets all the basics right: a minimal interface, themes, decent speed, different edit modes, syntax highlighting for code blocks, emoji, math support with LaTeX syntax, shortcuts for Markdown syntax, and inline display of image URLs. What more can you ask for from a text editor? Git integration, maybe :thinking:. I even made a theme for Typora to match my system.

On the Android side, I have gone completely spartan. I use Keep to store the quick thoughts that I come across, and since Keep is very usable and quick, I do not need a completely overpowered app for Android.

Sync

'But they are two completely different applications. How do you sync them?'

Simple answer: manually. I do not need to write down everything. I usually just put small hints and important points in Keep and then copy them over to my desktop editor using Keep's fairly decent web interface. Then I finally edit and store the good version of the note. It is a hard thing, and my notes are not editable on mobile, but it is an acceptable compromise to make, at least for me.

Now comes the real magic: sync from desktop to GitHub. Unless you have been living under a rock, you may know that GitHub recently announced free unlimited private repos with at most three collaborators. This means that although you can't move your exponentially growing startup to GitHub for free, you can move your personal stuff, if you weren't a student developer already. With the announcement of free private repos, you can reliably store your data on GitHub just like all your code, and GitHub is good enough to provide a decent web experience for reading Markdown files.

But the hard part is syncing the local repo. If you have ever used git, you would know that to actually save your work, you have to save the file -> add it to staging -> write a decent commit message -> push it to GitHub -> provide a password if using HTTPS. This whole chore is enough to drive anyone crazy. But there are ways to avoid it, and here I present my solution:

I use a bash script to automatically check for any changes to files, add them to staging, commit them with a somewhat descriptive message, pull any changes from GitHub, and finally push to GitHub. I have also configured git's credential cache, which saves the GitHub login password so that I do not have to enter it manually. Finally, I have added a cron job that runs the sync script every 15 minutes, so it updates my notes to GitHub automagically.
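The credential cache and cron pieces look roughly like this (the script path `/home/user/notes/sync.sh` is a placeholder for wherever you keep the script):

```shell
# Keep HTTPS credentials cached in memory so pushes don't prompt for a password.
# The timeout is in seconds; 900 s = 15 minutes, matching the cron interval below.
git config --global credential.helper 'cache --timeout=900'

# Crontab entry (added via `crontab -e`) to run the sync script every 15 minutes.
# The script path is a placeholder:
# */15 * * * * /home/user/notes/sync.sh
```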

Here is the bash script:
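The script itself didn't survive into this copy of the post, so below is a minimal reconstruction based on the description above (the `sync_notes` function name and the argument handling are mine; adjust the branch name and repo path to taste):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the notes sync script described above.
# `git status --porcelain` doubles as both the change check and the commit message.

sync_notes() {
    cd "$1" || return 1
    local changes
    changes=$(git status --porcelain)
    # Nothing changed: nothing to do.
    [ -z "$changes" ] && return 0
    git add -A
    git commit -m "Auto-sync: $changes"
    git pull --rebase origin master
    git push origin master
}

# When run directly, sync the notes repo passed as the first argument.
if [ -n "${1-}" ]; then
    sync_notes "$1"
fi
```

Pointing the 15-minute cron job at this script, with the notes repo path as its argument, gives the hands-free sync described above.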

The beauty of this script is the git status --porcelain command. It outputs a concise summary of all the files that were modified, which can be used both to check whether anything in the folder changed and as a commit message.
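For illustration, in a throwaway repo with a single new untracked file, the command's output looks like this:

```shell
# Create a throwaway repo with one untracked file to show the output format.
cd "$(mktemp -d)"
git init -q .
echo "groundbreaking idea" > note.md
git status --porcelain   # prints: ?? note.md
```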

With the sync set up, I had to copy over all the notes from the previous services, because why not. It was just a matter of copy and paste. I am now left to my imagination to fill the repo with future notes.

Happy Note taking

Historically, I’ve been the PHP kind of blogger. I basically learned the language by coding a sophisticated guestbook component for my various websites. A dynamic blog engine followed (and is still running!), but then I finally realized that it was not the right time to write yet another MVC CMS and switched to WordPress. It works just fine; we use it at Microsoft, and my Czech dev blog is powered by WordPress as well.

This post is about the next step, an evolution, which is a statically generated site. Let’s dig into it!

tl;dr

I’m using Typora, Hugo, Git and Visual Studio Team Services to author posts and publish them automatically through continuous integration pipeline.

Motivation

One day I realized that I don’t need the blog to be dynamic, generated from a database every time someone wants to read an article. I was posting only once a month, so there weren’t many changes anyway.

Therefore, these were my requirements:

  • Local & offline editing.
  • Minimal in all aspects - disk space, design etc.
  • Markdown as the authoring format (I got used to it way too much in the recent years).
  • Backed by Git, so that it’s possible to track changes and keep it in a cloud source control system.
  • Automatically published when done editing.
  • Transparent - content fully accessible in simple formats, no database “obfuscation”.

For these reasons I chose a handy generator called Hugo, which builds the whole site from Markdown files and HTML templates. But don’t get me wrong - it’s not THAT simplistic. It’s actually capable of many things - check their site and go through some of their tutorials to learn more about Hugo itself.

Implementation

Editor

Blogging is writing. And for me the main tool for writing Markdown is Typora, an Electron-powered cross-platform WYSIWYG editor.

Typora works on Windows, Mac and Linux and is quite minimalistic. It allows you to edit the markdown code directly, if you so desire, but you don’t have to. There are different visual themes available (I prefer GitHub) and once you learn a few keyboard shortcuts, you will get extremely productive. I mostly use:

  • Ctrl+B, Ctrl+I for Bold and Italic (obviously),
  • Ctrl+Shift+I to insert an image,
  • Ctrl+K to create a hyperlink (and if there’s a URL in clipboard, it will automatically fill it in!).

I also usually configure the images’ root folder as a parameter in every article’s front matter:
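The snippet was shown as an image in the original post; Typora reads the `typora-root-url` front-matter key, so it presumably looks something like this (the title and the exact path are illustrative):

```yaml
---
title: "My new post"            # hypothetical post title
typora-root-url: ../../static   # Typora resolves /images/... against Site/static
---
```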

This parameter is used by Typora only and allows me to write image paths in their final form (that is, /images/&lt;article&gt;/&lt;filename&gt;) and still see the images in the editor.

Local Authoring

When I’m writing a post or making changes to the site, I always have Hugo running in server mode, so that it automatically picks up any changes and updates the browser window.

Then I navigate to http://localhost:1313 and see how the post will look (it’s like the preview in WordPress, but live after each save).
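Starting the live-reloading server is a single command run from the Site folder (flags from Hugo’s CLI; `-D` additionally renders drafts):

```
cd Site
hugo server -D
# serves at http://localhost:1313 and rebuilds on every save
```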

Git Locally

If I wanted to continue editing the post on a different computer, I would commit & push it to Git and simply pull the repo elsewhere (or edit in the browser - we’ll get to it).

My whole blog is a Git repo. It has the following structure:

  • .git
  • Site
    • content
    • public
    • static
    • [… other Hugo stuff …]
    • config.toml
  • Utils
    • hugo.exe
  • .gitignore

The Site folder is fully under Hugo’s control. All posts and other Markdown files are in the content subfolder, and the static subfolder hosts all images. Hugo compiles everything into the public subfolder, which is in the end deployed to the web server.

I decided to pack the hugo.exe file together with the repo and placed it in the Utils folder, so that the build process can simply take the tool directly from the repo and run it. Updating also just means replacing the EXE with a new version.

[Update 01-2018]: This is no longer necessary as there’s an extension for VSTS which provides Hugo as a build step.

Finally, .gitignore consists of a single line:
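The line itself was shown as an image in the original post; given the folder layout above and the explanation below, it is presumably the generated output folder:

```
Site/public
```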

The reason is simple - I use Hugo locally to preview and test articles before publishing. But for the live site I want Continuous Integration to generate content in the cloud on every change. I don’t want the locally generated content to overwrite what I have online.

Git in the Cloud

With the post written, site designed and everything built to static HTML files I could simply upload the public folder to FTP server and be done with it. That wasn’t enough for me, though. I wanted full Continuous Integration (CI) pipeline. Coming to the stage is Visual Studio Team Services (VSTS)!

VSTS offers free source control, automatic builds, release management, load tests and more. So each git push goes to my private Git repository and then jumps to the continuous integration machinery.

To store your source code in VSTS you have to create a Project, then go to Code and clone the repo. Or you can create the project, copy the remote address, and issue the git remote add command in an existing local repo.
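The second route looks roughly like this (the remote URL is a made-up example of the VSTS format; substitute your own account and project):

```shell
# Create (or reuse) a local repo and attach the VSTS remote.
git init -q blog
cd blog
# Hypothetical VSTS remote URL format:
git remote add origin https://myaccount.visualstudio.com/DefaultCollection/_git/Blog
git remote -v
```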

Blog source code is up; let’s have a look at how it is actually built into a static website. It happens in the Build & Release part, where I’ve created a new Build Definition (from a blank template) with three steps:

  1. In the first step, the build machine just gets the sources from the repository: branch = master and don’t clean (= false).

  2. The second step is building the site using Hugo (we have hugo.exe in the repo, remember?):

  3. The third step is publishing the build result (i.e. the Site/public folder) and passing it to Release:
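As a command line, the Hugo build in step 2 would be something like the following (the exact arguments are my assumption; `--source` points Hugo at the Site folder, and the output lands in Site/public):

```
Utils\hugo.exe --source Site
```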

After these three steps, all posts are converted from markdown to HTML, merged with templates and along with images placed to the public folder. Let’s switch to Release and continue the publishing process.

It starts with creating a new release definition with an Empty process:

With one Environment:

And artifact as the latest version from project build:

This creates a simple pipeline: “Take publish output from this build and send it to this environment”. Next is turning on the continuous integration trigger, so that the site is published every time there’s something new (i.e. after a git push).

Finally I had to specify HOW exactly was the new version published. That is done in the Tasks section of the release definition.

I clicked Add a task to the phase:

And chose FTP Upload from the Utility section:

This task required a few properties to be filled in, so I specified my website’s FTP address, username and password, and the source folder (remember? PublishOutput now contains the whole public folder).

Hint: If you can, create an FTP user dedicated just for this site with no access anywhere else. For security’s sake ;)

If you look closely at the picture, you can see two custom variables:

$(FTPUser) and $(FTPPassword)

These were set in the Variables section of the release definition; it’s a good practice to “mask” credentials like that. You can mark any variable as a secret - therefore not retrievable.

When the time comes for the post to leave my local computer and fly up to the cloud, I do a simple Git Push:

Online Authoring

Imagine you want to quickly fix a typo. Your computer is nowhere near, and you’re not going to install Git and clone the repo to your parents’ computer, are you?

Having the whole site’s content stored as source code in a repository has several side benefits. One of them is the ability to edit anything in the browser and push changes through the same pipeline that I use from my primary PC.

VSTS even has syntax colouring and a preview for Markdown.

And of course, it’s still just static HTML files, so if I had the desire to generate any piece of content locally or fiddle with files in production, I could certainly log in via FTP and browse the raw content of my site.

Summary

That’s about it. Proper configuration took some time and several trials with many errors (and is still evolving), but I’m now happy with the publishing process - from editing to seeing the article online.