Dropbox sync for the blog

Ok, so I went ahead and added a database and implemented some sort of sync between said database and my Dropbox folder. It's not perfect, but it works for me.

I created a Posts model and configured DataMapper:

require 'data_mapper' # the dm-postgres-adapter gem handles the Postgres URLs below

class Posts
  # Yes, I know the general consensus is to use singular names for models,
  # but DataMapper creates/uses tables named 'application_[class_name]', and 'application_posts' makes more sense.
  # Update, 30 May 2014: I should have cared more about the class name than the table name. Point learned.
  include DataMapper::Resource
  property :id, Serial
  DataMapper::Property::String.length(255)
  DataMapper::Property::Text.length(999999) # Just because :)
  property :title, Text
  property :body, Text
  property :date, String
  property :time, String
  property :modified, String
end
configure :production do
  require 'newrelic_rpm'
  # Heroku provides the 'DATABASE_URL' so you don't have to type it manually
  DataMapper.setup(:default, ENV['DATABASE_URL'])
  DataMapper.auto_upgrade!
end
configure :development do
  DataMapper::setup(:default, "postgres://roland@localhost/roland")
  DataMapper.auto_upgrade!
end
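
The sync route below expects a handful of constants: APP_KEY and APP_SECRET come from the Dropbox app console, ACCESS_TYPE is :dropbox or :app_folder, and AUTH_KEY and AUTH_SECRET are what the old dropbox-sdk gem hands out after a one-time authorization dance, roughly like this (run it once and save the two values):

require 'dropbox_sdk'

session = DropboxSession.new(APP_KEY, APP_SECRET)
session.get_request_token
puts session.get_authorize_url # open this URL and approve the app
gets # wait here until that's done
session.get_access_token
puts session.access_token.key    # AUTH_KEY
puts session.access_token.secret # AUTH_SECRET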

On to the Dropbox sync itself, attached to a custom URL; a simple Hazel rule hits that URL whenever I add a new file to my posts folder (there's a sketch of the rule further down):

get '/cmd.Dropbox.Sync' do
  session = DropboxSession.new(APP_KEY, APP_SECRET)
  session.set_access_token(AUTH_KEY, AUTH_SECRET)
  client = DropboxClient.new(session, ACCESS_TYPE)
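  # List everything currently in the posts folder (Dropbox API v1 metadata call)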
  client_metadata = client.metadata('/Apps/Editorial/posts')['contents']
  client_metadata.each do |file|
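    # Post filenames follow 'YYYY-MM-DD-HHMM-Title.md'; a made-up example:
    # '/apps/editorial/posts/2013-08-27-1045-Dropbox sync for the blog.md'
    # Groups 4-6 capture the date, group 7 the time, group 8 the title.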
    matches = file['path'].match(/\/(apps)\/(editorial)\/(posts)\/(\d{4})-(\d{2})-(\d{2})-(\d{4})-([\w\s\.\}\{\[\]_&@$:"';!=\?\+\*\-\)\(]+)\.md$/)
    next unless matches # skip anything that doesn't look like a post file
    date = matches[4].to_s + '-' + matches[5].to_s + '-' + matches[6].to_s
    time = matches[7].to_s
    title = matches[8].to_s
    file_mtime = file['client_mtime'].to_s
    # If we were to assign the body variable outside the if-else statement
    # to avoid repeating code, all the files would be downloaded, _greatly_ increasing
    # the time for this bit of code to complete. This way, we download only the required files.
    post = Posts.first(:title => title)
    # If the post exists
    if post
      # Check whether it was modified
      if post.modified != file_mtime
        body = client.get_file(file['path'])
        post.update(title: title, body: body, date: date, time: time, modified: file_mtime)
      end
    # Otherwise, create a new record
    else
      body = client.get_file(file['path'])
      Posts.create(title: title, body: body, date: date, time: time, modified: file_mtime)
    end
  end
  all_posts = Posts.all
  # Check if any post was deleted (highly unlikely)
  all_posts.each do |post|
    delete = true
    client_metadata.each do |file|
      m = file['path'].match(/\/(apps)\/(editorial)\/(posts)\/(\d{4})-(\d{2})-(\d{2})-(\d{4})-([\w\s\.\}\{\[\]_&@$:"';!=\?\+\*\-\)\(]+)\.md$/)
      delete = false if m && m[8].to_s == post.title
    end
    post.destroy if delete
  end
  redirect '/', 302
end
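
The Hazel rule itself does nothing clever: it just hits that URL whenever a new file lands in the posts folder. Its curl action boils down to something like this (the domain is a placeholder):

require 'net/http'

# Equivalent of the Hazel rule's curl action; swap in the real blog address
Net::HTTP.get_response(URI('http://example.com/cmd.Dropbox.Sync'))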

Then all I had left to do was to modify the code for parsing posts:

# This returns an array of structures containing all the posts' data
all_posts = repository(:default).adapter.select('SELECT * FROM application_posts')
# Convert the structs to hashes
all_posts.map! { |struct| struct.to_h }
# Sort by date and time, newest first; the columns are zero-padded strings,
# so plain string comparison sorts chronologically
all_posts.sort_by! { |post| post[:date] + post[:time] }.reverse!
[…]
category = post.body.lines.first
date_matches = post.date.match(/(\d{4})-(\d{2})-(\d{2})/)
date = Date.new(date_matches[1].to_i, date_matches[2].to_i, date_matches[3].to_i).to_time.utc.to_i
title = post.title
content = _markdown(post.body.lines[2..-1].join)
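
The _markdown helper just wraps whatever Markdown renderer the app uses; it isn't shown above. With Redcarpet, for instance, a minimal version could look like this (the renderer choice is an assumption for illustration):

require 'redcarpet'

# Hypothetical helper: render a Markdown string to HTML
def _markdown(text)
  Redcarpet::Markdown.new(Redcarpet::Render::HTML).render(text)
end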

Then I removed all the md files from my project, since I wouldn't be needing them anymore.

That's about it: I can add new posts, modify old posts, delete posts, and even add posts dated in the past. All of this without pushing new commits, from my phone if I want, just by writing md files. ==Awesome!==

As usual, it was much easier than I expected before I started... *Stop thinking it's too hard!* strikes again.

Update, 28 Aug 2013: I added a little Ruby script to the Hazel rule, so I can keep a local count of rule triggers. Why not merge the HTTP request into this script? Because if the curl action doesn't complete, Hazel won't continue on to the Ruby action: no sync, no count.

# Syncs.txt format: the running count, a blank line, then one timestamp per sync
lines = File.readlines('Syncs.txt')
count = lines.count - 1 # previous timestamps plus the one we're about to add
date = Time.now.strftime('%Y-%m-%d %I:%M %p')
body = count.to_s + "\n\n" + date + "\n" + lines[2..-1].join
File.open('Syncs.txt', 'w') { |f| f.write(body) }

I also keep a table in the server's database that tracks how many times the URL has been accessed; this way I can see when other people feeling funny have been hitting it, heh.
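
For the curious, that counter doesn't need much. A minimal sketch, assuming DataMapper again (the model and property names are made up for illustration):

class Hits
  include DataMapper::Resource
  property :id, Serial
  property :stamp, String
end

# ...and at the top of the sync route:
# Hits.create(stamp: Time.now.to_s)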