
Ruby on Rails: Why Doesn't My Cron Job Work Properly?

I have a cron job on an Ubuntu Hardy VPS that only half works and I can't work out why. The job is a Ruby script that uses mysqldump to back up a MySQL database used by a Rails application, which is then gzipped and uploaded to a remote server using SFTP.

The gzip file is created and copied successfully but it's always zero bytes. Yet if I run the cron command directly from the command line it works perfectly.

This is the cron job:

PATH=/usr/bin
10 3 * * * ruby /home/deploy/bin/datadump.rb

This is datadump.rb:

#!/usr/bin/ruby
require 'yaml'
require 'logger'
require 'rubygems'
require 'net/ssh'
require 'net/sftp'

APP        = '/home/deploy/apps/myapp/current'
LOGFILE    = '/home/deploy/log/data.log'
TIMESTAMP  = '%Y%m%d-%H%M'
TABLES     = 'table1 table2'

log        = Logger.new(LOGFILE, 5, 10 * 1024)
dump       = "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz"
ftpconfig  = YAML::load(open('/home/deploy/apps/myapp/shared/config/sftp.yml'))
config     = YAML::load(open(APP + '/config/database.yml'))['production']
cmd        = "mysqldump -u #{config['username']} -p#{config['password']} -h #{config['host']} --add-drop-table --add-locks --extended-insert --lock-tables #{config['database']} #{TABLES} | gzip -cf9 > #{dump}"

log.info 'Getting ready to create a backup'
`#{cmd}`    

# Strongspace
log.info 'Backup created, starting the transfer to Strongspace'
Net::SSH.start(ftpconfig['strongspace']['host'], ftpconfig['strongspace']['username'], ftpconfig['strongspace']['password']) do |ssh|
  ssh.sftp.connect do |sftp|
    sftp.open_handle("#{ftpconfig['strongspace']['dir']}/#{dump}", 'w') do |handle|
      sftp.write(handle, open("#{dump}").read)
    end
  end
end
log.info 'Finished transferring backup to Strongspace'

log.info 'Removing local file'
cmd       = "rm -f #{dump}" 
log.debug "Executing: #{cmd}"
`#{cmd}`
log.info 'Local file removed'

I've checked and double-checked all the paths and they're correct. Both sftp.yml (SFTP credentials) and database.yml (MySQL credentials) are owned by the executing user (deploy) with read-only permissions for that user (chmod 400). I'm using the 1.1.x versions of net-ssh and net-sftp. I know they're not the latest, but they're what I'm familiar with at the moment.

What could be causing the cron job to fail?


1 Answer


When scripts run correctly interactively but fail when run by cron, the problem is usually caused by differences in the environment settings: for example the PATH, as already mentioned by @Ted Percival, but possibly other environment variables as well.

This is because cron does not source .bash_profile, .bashrc or /etc/profile before executing your command.

The best way to avoid this is to ensure that any script invoked by cron makes no assumptions about the environment it executes in. Overcoming this can be as simple as adding a few lines to the script to set up the environment properly. For example, in my case all the significant settings live in /etc/profile (on RHEL), so I include the following line in any script that runs under cron:

source /etc/profile
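
In your particular case, an alternative that keeps the fix next to the schedule is to pin down the environment in the crontab itself and capture output for diagnosis. This is a sketch, not a drop-in fix: the log path is hypothetical, and it assumes the usual Ubuntu layout where mysqldump lives in /usr/bin but gzip lives in /bin, a directory your PATH=/usr/bin line leaves out, which would be consistent with the zero-byte gzip file:

SHELL=/bin/bash
# Include /bin as well as /usr/bin so both mysqldump and gzip are found.
PATH=/usr/local/bin:/usr/bin:/bin
# Redirect stdout and stderr to a log so failures under cron are visible.
10 3 * * * ruby /home/deploy/bin/datadump.rb >> /home/deploy/log/cron.log 2>&1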

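Independent of which environment fix you choose, a small guard in datadump.rb would have surfaced this failure immediately instead of uploading an empty archive. The following is a sketch against the script above (cmd, dump and log as already defined there):

log.info 'Getting ready to create a backup'
output = `#{cmd} 2>&1`

# Backticks run the command through /bin/sh, and $? holds the resulting
# Process::Status. For a pipeline that status belongs to the last command
# (gzip), so also verify the dump is non-empty: File.size? returns nil
# for a missing or zero-byte file.
unless $?.success? && File.size?(dump)
  log.error "Backup failed (exit #{$?.exitstatus}): #{output}"
  exit 1
end

With that in place the log records the underlying shell error (for example a command-not-found message) rather than silently shipping a zero-byte file.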