Retry task in queue without causing an error in the monitoring

We are using Google App Engine and the built-in TaskQueues to send daily update emails using Mandrill. They are sent at a specific time and there are usually a lot of them. We schedule one task per email to send, and some of them fail because the Mandrill API times out or we hit a rate limit.
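A minimal sketch of that fan-out step, one push task per recipient. The worker URL and payload fields are taken from the handler shown below; the `enqueue` parameter is a stand-in for `google.appengine.api.taskqueue.add` so the logic can be shown without the App Engine SDK:

```python
import json


def schedule_daily_emails(recipients, subject, body_text, enqueue):
    """Enqueue one push task per recipient email address (sketch)."""
    for to in recipients:
        payload = json.dumps({
            'to': to,
            'subject': subject,
            'body_text': body_text,
        })
        # On App Engine this would be roughly:
        #   taskqueue.add(url='/worker/send_transactional_email/',
        #                 params={'payload': payload})
        enqueue('/worker/send_transactional_email/', {'payload': payload})
```

Because each recipient gets its own task, one timed-out Mandrill call only retries that single email rather than the whole batch.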

Our code looks like this:

    @app.route('/worker/send_transactional_email/', methods=['POST'])
    def worker_send_transactional_mail():

        payload = json.loads(request.values.get('payload'))

        message = {
            'to': [payload.get('to')],
            'subject': payload.get('subject'),
            'from_name': 'Our App',
            'from_email': "noreply@ourapp.local",
            'text': payload.get('body_text')
        }

        mandrill_payload = _get_mandrill_payload(message)

        url = ""
        req = urllib2.Request(url, mandrill_payload, {'Content-Type': 'application/json'})
        try:
            urllib2.urlopen(req)
        except urllib2.URLError as error:
            logging.error("to: " + unicode(payload.get('to')))
            logging.error("subject: " + unicode(payload.get('subject')))
            abort(500)
        return 'ok'

The solution works pretty well: tasks that fail are retried later and usually complete then. The only problem is that the failing tasks show up in the error log and cause Google Cloud Monitoring to report errors.
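How often and how quickly those failed tasks come back can be tuned per queue in `queue.yaml`. The values below are illustrative, not the actual configuration from the question:

```yaml
# queue.yaml -- illustrative values only
queue:
- name: transactional-email
  rate: 10/s
  retry_parameters:
    task_retry_limit: 7        # give up after 7 attempts
    min_backoff_seconds: 10    # wait at least 10s before the first retry
    max_backoff_seconds: 300   # cap the exponential backoff at 5 minutes
```

Backing off more aggressively can also help when the failures are caused by a Mandrill rate limit rather than a timeout.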

What I'd like is to replace the abort(500) in the except block above with something that tells the TaskQueue to retry the task without logging an error. I know the TaskQueue will retry on any status code outside 200-299, but returning a 301 or similar feels highly misleading.
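One possibility along those lines is to log the transient failure at warning level and return an explicit non-2xx status instead of raising via abort(500). 503 ("Service Unavailable") is assumed here as the least misleading "try again later" code; whether it actually keeps Cloud Monitoring quiet is exactly what the question is asking. A sketch of the helper:

```python
import logging

# Any status outside 200-299 makes the TaskQueue retry the task.
RETRY_STATUS = 503


def transient_failure_response(error, payload):
    """Build a (body, status) pair a Flask view could return on URLError.

    Logs at warning rather than error level, so the retryable failure
    is still visible in the logs without being flagged as an error.
    """
    logging.warning("mail send failed, will retry: %s (to=%s)",
                    error, payload.get('to'))
    return 'retry', RETRY_STATUS
```

In the handler above, the except block would then end with `return transient_failure_response(error, payload)` in place of abort(500), relying on Flask's support for returning a `(body, status)` tuple from a view.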


Content Attribution

This content was originally published by Sgoettschkes at Recent Questions - Stack Overflow, and is syndicated here via their RSS feed. You can read the original post over there.
