I've read your readme more carefully now - makes sense.
Why not "just" dump the Evernote XML so one can use the import functionality of Evernote? That would be a workaround for the email limits as well...
Hmm, yeah, that might work better, especially since it looks like Evernote is what's adding those unclosed tags to the notes that are submitted via e-mail, making it hard to figure out how to get back to the original. I'll take a look at doing that tonight.
Hi,
so I've tinkered a bit with a .enex exporter myself. Exporting HTML is a nightmare because Evernote seems to block the import on non-allowed tags. So I thought to export to Markdown instead. The URL is now saved in the proper URL field, so that's good at least. It still doesn't manage to import ALL of my notes - it gets stuck at 230 of 3000... I still don't understand what's troubling it. Also, I'm cropping the length of the note to the first 1000 characters, because long notes again seem to trouble Evernote.
P.S. The HTML-to-Markdown code is from http://www.aaronsw.com/2002/html2text/ and unashamedly dropped into the code. Awful, awful coding, but with a diff tool it should kind of make sense. Sorry I'm pasting it here like this; I don't have time now to do a proper fork etc. Just thought it would be useful to paste it here in case some bits can be reused...
Source code is below; I invoke it by doing:
python export_gr2evernote.py -e davidedc78.43657 -g davidedc > exportStarredItems.enex
# A script for exporting all starred items from Google Reader to Evernote,
# using exported JSON data from Google's Takeout and Evernote's
# note emailing feature.
#
# Copyright 2013 Paul Kerchen
#
# This program is distributed under the terms of the GNU General Public License v3.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import smtplib
import json
import io
import getopt, sys
import getpass
import os.path
import pickle
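# Compatibility shims carried over from html2text: define True/False for
# ancient Pythons, and paper over dict.has_key's removal in Python 3.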
try:
True
except NameError:
setattr(__builtins__, 'True', 1)
setattr(__builtins__, 'False', 0)
def has_key(x, y):
if hasattr(x, 'has_key'): return x.has_key(y)
else: return y in x
try:
import htmlentitydefs
import urlparse
import HTMLParser
except ImportError: #Python3
import html.entities as htmlentitydefs
import urllib.parse as urlparse
import html.parser as HTMLParser
try: #Python3
import urllib.request as urllib
except:
import urllib
import optparse, re, sys, codecs, types
try: from textwrap import wrap
except: pass
# Use Unicode characters instead of their ascii pseudo-replacements
UNICODE_SNOB = 0
# Put the links after each paragraph instead of at the end.
LINKS_EACH_PARAGRAPH = 0
# Wrap long lines at position. 0 for no wrapping. (Requires Python 2.3.)
BODY_WIDTH = 78
# Don't show internal links (href="#local-anchor") -- corresponding link targets
# won't be visible in the plain text file anyway.
SKIP_INTERNAL_LINKS = True
# Use inline, rather than reference, formatting for images and links
INLINE_LINKS = True
# Number of pixels Google indents nested lists
GOOGLE_LIST_INDENT = 36
IGNORE_ANCHORS = False
IGNORE_IMAGES = False
### Entity Nonsense ###
def name2cp(k):
if k == 'apos': return ord("'")
if hasattr(htmlentitydefs, "name2codepoint"): # requires Python 2.3
return htmlentitydefs.name2codepoint[k]
else:
k = htmlentitydefs.entitydefs[k]
if k.startswith("&#") and k.endswith(";"): return int(k[2:-1]) # not in latin-1
return ord(codecs.latin_1_decode(k)[0])
unifiable = {'rsquo':"'", 'lsquo':"'", 'rdquo':'"', 'ldquo':'"',
'copy':'(C)', 'mdash':'--', 'nbsp':' ', 'rarr':'->', 'larr':'<-', 'middot':'*',
'ndash':'-', 'oelig':'oe', 'aelig':'ae',
'agrave':'a', 'aacute':'a', 'acirc':'a', 'atilde':'a', 'auml':'a', 'aring':'a',
'egrave':'e', 'eacute':'e', 'ecirc':'e', 'euml':'e',
'igrave':'i', 'iacute':'i', 'icirc':'i', 'iuml':'i',
'ograve':'o', 'oacute':'o', 'ocirc':'o', 'otilde':'o', 'ouml':'o',
'ugrave':'u', 'uacute':'u', 'ucirc':'u', 'uuml':'u',
'lrm':'', 'rlm':''}
unifiable_n = {}
for k in unifiable.keys():
unifiable_n[name2cp(k)] = unifiable[k]
def charref(name):
if name[0] in ['x','X']:
c = int(name[1:], 16)
else:
c = int(name)
if not UNICODE_SNOB and c in unifiable_n.keys():
return unifiable_n[c]
else:
try:
return unichr(c)
except NameError: #Python3
return chr(c)
def entityref(c):
if not UNICODE_SNOB and c in unifiable.keys():
return unifiable[c]
else:
try: name2cp(c)
except KeyError: return "&" + c + ';'
else:
try:
return unichr(name2cp(c))
except NameError: #Python3
return chr(name2cp(c))
def replaceEntities(s):
s = s.group(1)
if s[0] == "#":
return charref(s[1:])
else: return entityref(s)
r_unescape = re.compile(r"&(#?[xX]?(?:[0-9a-fA-F]+|\w{1,8}));")
def unescape(s):
return r_unescape.sub(replaceEntities, s)
### End Entity Nonsense ###
def onlywhite(line):
    """Return true if the line consists only of whitespace characters."""
    for c in line:
        # Use equality, not identity, and treat both spaces and tabs as whitespace.
        if c != ' ' and c != '\t':
            return False
    return line
def optwrap(text):
"""Wrap all paragraphs in the provided text."""
if not BODY_WIDTH:
return text
assert wrap, "Requires Python 2.3."
result = ''
newlines = 0
for para in text.split("\n"):
if len(para) > 0:
if para[0] != ' ' and para[0] != '-' and para[0] != '*':
for line in wrap(para, BODY_WIDTH):
result += line + "\n"
result += "\n"
newlines = 2
else:
if not onlywhite(para):
result += para + "\n"
newlines = 1
else:
if newlines < 2:
result += "\n"
newlines += 1
return result
def hn(tag):
if tag[0] == 'h' and len(tag) == 2:
try:
n = int(tag[1])
if n in range(1, 10): return n
except ValueError: return 0
def dumb_property_dict(style):
"""returns a hash of css attributes"""
return dict([(x.strip(), y.strip()) for x, y in [z.split(':', 1) for z in style.split(';') if ':' in z]])
def dumb_css_parser(data):
"""returns a hash of css selectors, each of which contains a hash of css attributes"""
# remove @import sentences
importIndex = data.find('@import')
while importIndex != -1:
data = data[0:importIndex] + data[data.find(';', importIndex) + 1:]
importIndex = data.find('@import')
# parse the css. reverted from dictionary comprehension in order to support older pythons
elements = [x.split('{') for x in data.split('}') if '{' in x.strip()]
elements = dict([(a.strip(), dumb_property_dict(b)) for a, b in elements])
return elements
def element_style(attrs, style_def, parent_style):
"""returns a hash of the 'final' style attributes of the element"""
style = parent_style.copy()
if 'class' in attrs:
for css_class in attrs['class'].split():
css_style = style_def['.' + css_class]
style.update(css_style)
if 'style' in attrs:
immediate_style = dumb_property_dict(attrs['style'])
style.update(immediate_style)
return style
def google_list_style(style):
"""finds out whether this is an ordered or unordered list"""
if 'list-style-type' in style:
list_style = style['list-style-type']
if list_style in ['disc', 'circle', 'square', 'none']:
return 'ul'
return 'ol'
def google_nest_count(style):
"""calculate the nesting count of google doc lists"""
nest_count = 0
if 'margin-left' in style:
nest_count = int(style['margin-left'][:-2]) / GOOGLE_LIST_INDENT
return nest_count
def google_has_height(style):
"""check if the style of the element has the 'height' attribute explicitly defined"""
if 'height' in style:
return True
return False
def google_text_emphasis(style):
"""return a list of all emphasis modifiers of the element"""
emphasis = []
if 'text-decoration' in style:
emphasis.append(style['text-decoration'])
if 'font-style' in style:
emphasis.append(style['font-style'])
if 'font-weight' in style:
emphasis.append(style['font-weight'])
return emphasis
def google_fixed_width_font(style):
"""check if the css of the current element defines a fixed width font"""
font_family = ''
if 'font-family' in style:
font_family = style['font-family']
if 'Courier New' == font_family or 'Consolas' == font_family:
return True
return False
def list_numbering_start(attrs):
"""extract numbering from list element attributes"""
if 'start' in attrs:
return int(attrs['start']) - 1
else:
return 0
class _html2text(HTMLParser.HTMLParser):
def __init__(self, out=None, baseurl=''):
HTMLParser.HTMLParser.__init__(self)
if out is None: self.out = self.outtextf
else: self.out = out
self.outtextlist = [] # empty list to store output characters before they are "joined"
try:
self.outtext = unicode()
except NameError: # Python3
self.outtext = str()
self.quiet = 0
self.p_p = 0 # number of newline characters to print before next output
self.outcount = 0
self.start = 1
self.space = 0
self.a = []
self.astack = []
self.acount = 0
self.list = []
self.blockquote = 0
self.pre = 0
self.startpre = 0
self.code = False
self.br_toggle = ''
self.lastWasNL = 0
self.lastWasList = False
self.style = 0
self.style_def = {}
self.tag_stack = []
self.emphasis = 0
self.drop_white_space = 0
self.inheader = False
self.abbr_title = None # current abbreviation definition
self.abbr_data = None # last inner HTML (for abbr being defined)
self.abbr_list = {} # stack of abbreviations to write later
self.baseurl = baseurl
if options.google_doc:
del unifiable_n[name2cp('nbsp')]
unifiable['nbsp'] = ' _place_holder;'
def feed(self, data):
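# Some pages emit "</' + 'script>" inside JavaScript string concatenations;
# rewrite that literal so it cannot prematurely terminate parsing.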
data = data.replace("</' + 'script>", "</ignore>")
HTMLParser.HTMLParser.feed(self, data)
def outtextf(self, s):
self.outtextlist.append(s)
if s: self.lastWasNL = s[-1] == '\n'
def close(self):
HTMLParser.HTMLParser.close(self)
self.pbr()
self.o('', 0, 'end')
self.outtext = self.outtext.join(self.outtextlist)
if options.google_doc:
self.outtext = self.outtext.replace(' _place_holder;', ' ')
return self.outtext
def handle_charref(self, c):
self.o(charref(c), 1)
def handle_entityref(self, c):
self.o(entityref(c), 1)
def handle_starttag(self, tag, attrs):
self.handle_tag(tag, attrs, 1)
def handle_endtag(self, tag):
self.handle_tag(tag, None, 0)
def previousIndex(self, attrs):
""" returns the index of certain set of attributes (of a link) in the
self.a list
If the set of attributes is not found, returns None
"""
if not has_key(attrs, 'href'): return None
i = -1
for a in self.a:
i += 1
match = 0
if has_key(a, 'href') and a['href'] == attrs['href']:
if has_key(a, 'title') or has_key(attrs, 'title'):
if (has_key(a, 'title') and has_key(attrs, 'title') and
a['title'] == attrs['title']):
match = True
else:
match = True
if match: return i
def drop_last(self, nLetters):
if not self.quiet:
self.outtext = self.outtext[:-nLetters]
def handle_emphasis(self, start, tag_style, parent_style):
"""handles various text emphases"""
tag_emphasis = google_text_emphasis(tag_style)
parent_emphasis = google_text_emphasis(parent_style)
# handle Google's text emphasis
strikethrough = 'line-through' in tag_emphasis and options.hide_strikethrough
bold = 'bold' in tag_emphasis and not 'bold' in parent_emphasis
italic = 'italic' in tag_emphasis and not 'italic' in parent_emphasis
fixed = google_fixed_width_font(tag_style) and not \
google_fixed_width_font(parent_style) and not self.pre
if start:
# crossed-out text must be handled before other attributes
# in order not to output qualifiers unnecessarily
if bold or italic or fixed:
self.emphasis += 1
if strikethrough:
self.quiet += 1
if italic:
self.o("_")
self.drop_white_space += 1
if bold:
self.o("**")
self.drop_white_space += 1
if fixed:
self.o('`')
self.drop_white_space += 1
self.code = True
else:
if bold or italic or fixed:
# there must not be whitespace before closing emphasis mark
self.emphasis -= 1
self.space = 0
self.outtext = self.outtext.rstrip()
if fixed:
if self.drop_white_space:
# empty emphasis, drop it
self.drop_last(1)
self.drop_white_space -= 1
else:
self.o('`')
self.code = False
if bold:
if self.drop_white_space:
# empty emphasis, drop it
self.drop_last(2)
self.drop_white_space -= 1
else:
self.o("**")
if italic:
if self.drop_white_space:
# empty emphasis, drop it
self.drop_last(1)
self.drop_white_space -= 1
else:
self.o("_")
# space is only allowed after *all* emphasis marks
if (bold or italic) and not self.emphasis:
self.o(" ")
if strikethrough:
self.quiet -= 1
def handle_tag(self, tag, attrs, start):
#attrs = fixattrs(attrs)
if attrs is None:
attrs = {}
else:
attrs = dict(attrs)
if options.google_doc:
# the attrs parameter is empty for a closing tag. in addition, we
# need the attributes of the parent nodes in order to get a
# complete style description for the current element. we assume
# that google docs export well formed html.
parent_style = {}
if start:
if self.tag_stack:
parent_style = self.tag_stack[-1][2]
tag_style = element_style(attrs, self.style_def, parent_style)
self.tag_stack.append((tag, attrs, tag_style))
else:
dummy, attrs, tag_style = self.tag_stack.pop()
if self.tag_stack:
parent_style = self.tag_stack[-1][2]
if hn(tag):
self.p()
if start:
self.inheader = True
self.o(hn(tag)*"#" + ' ')
else:
self.inheader = False
return # prevent redundant emphasis marks on headers
if tag in ['p', 'div']:
if options.google_doc:
if start and google_has_height(tag_style):
self.p()
else:
self.soft_br()
else:
self.p()
if tag == "br" and start: self.o(" \n")
if tag == "hr" and start:
self.p()
self.o("* * *")
self.p()
if tag in ["head", "style", 'script']:
if start: self.quiet += 1
else: self.quiet -= 1
if tag == "style":
if start: self.style += 1
else: self.style -= 1
if tag in ["body"]:
self.quiet = 0 # sites like 9rules.com never close <head>
if tag == "blockquote":
if start:
self.p(); self.o('> ', 0, 1); self.start = 1
self.blockquote += 1
else:
self.blockquote -= 1
self.p()
if tag in ['em', 'i', 'u']: self.o("_")
if tag in ['strong', 'b']: self.o("**")
if tag in ['del', 'strike']:
if start:
self.o("<"+tag+">")
else:
self.o("</"+tag+">")
if options.google_doc:
if not self.inheader:
# handle some font attributes, but leave headers clean
self.handle_emphasis(start, tag_style, parent_style)
if tag == "code" and not self.pre: self.o('`') #TODO: `` `this` ``
if tag == "abbr":
if start:
self.abbr_title = None
self.abbr_data = ''
if has_key(attrs, 'title'):
self.abbr_title = attrs['title']
else:
if self.abbr_title != None:
self.abbr_list[self.abbr_data] = self.abbr_title
self.abbr_title = None
self.abbr_data = ''
if tag == "a" and not IGNORE_ANCHORS:
if start:
if has_key(attrs, 'href') and not (SKIP_INTERNAL_LINKS and attrs['href'].startswith('#')):
self.astack.append(attrs)
self.o("[")
else:
self.astack.append(None)
else:
if self.astack:
a = self.astack.pop()
if a:
if INLINE_LINKS:
self.o("](" + a['href'] + ")")
else:
i = self.previousIndex(a)
if i is not None:
a = self.a[i]
else:
self.acount += 1
a['count'] = self.acount
a['outcount'] = self.outcount
self.a.append(a)
self.o("][" + str(a['count']) + "]")
if tag == "img" and start and not IGNORE_IMAGES:
if has_key(attrs, 'src'):
attrs['href'] = attrs['src']
alt = attrs.get('alt', '')
if INLINE_LINKS:
self.o("![")
self.o(alt)
self.o("]("+ attrs['href'] +")")
else:
i = self.previousIndex(attrs)
if i is not None:
attrs = self.a[i]
else:
self.acount += 1
attrs['count'] = self.acount
attrs['outcount'] = self.outcount
self.a.append(attrs)
self.o("![")
self.o(alt)
self.o("]["+ str(attrs['count']) +"]")
if tag == 'dl' and start: self.p()
if tag == 'dt' and not start: self.pbr()
if tag == 'dd' and start: self.o(' ')
if tag == 'dd' and not start: self.pbr()
if tag in ["ol", "ul"]:
# Google Docs create sub lists as top level lists
if (not self.list) and (not self.lastWasList):
self.p()
if start:
if options.google_doc:
list_style = google_list_style(tag_style)
else:
list_style = tag
numbering_start = list_numbering_start(attrs)
self.list.append({'name':list_style, 'num':numbering_start})
else:
if self.list: self.list.pop()
self.lastWasList = True
else:
self.lastWasList = False
if tag == 'li':
self.pbr()
if start:
if self.list: li = self.list[-1]
else: li = {'name':'ul', 'num':0}
if options.google_doc:
nest_count = google_nest_count(tag_style)
else:
nest_count = len(self.list)
self.o(" " * nest_count) #TODO: line up <ol><li>s > 9 correctly.
if li['name'] == "ul": self.o(options.ul_item_mark + " ")
elif li['name'] == "ol":
li['num'] += 1
self.o(str(li['num'])+". ")
self.start = 1
if tag in ["table", "tr"] and start: self.p()
if tag == 'td': self.pbr()
if tag == "pre":
if start:
self.startpre = 1
self.pre = 1
else:
self.pre = 0
self.p()
def pbr(self):
if self.p_p == 0: self.p_p = 1
def p(self): self.p_p = 2
def soft_br(self):
self.pbr()
self.br_toggle = ' '
def o(self, data, puredata=0, force=0):
if self.abbr_data is not None: self.abbr_data += data
if not self.quiet:
if options.google_doc:
# prevent white space immediately after 'begin emphasis' marks ('**' and '_')
lstripped_data = data.lstrip()
if self.drop_white_space and not (self.pre or self.code):
data = lstripped_data
if lstripped_data != '':
self.drop_white_space = 0
if puredata and not self.pre:
data = re.sub(r'\s+', ' ', data)
if data and data[0] == ' ':
self.space = 1
data = data[1:]
if not data and not force: return
if self.startpre:
#self.out(" :") #TODO: not output when already one there
self.startpre = 0
bq = (">" * self.blockquote)
if not (force and data and data[0] == ">") and self.blockquote: bq += " "
if self.pre:
bq += " "
data = data.replace("\n", "\n"+bq)
if self.start:
self.space = 0
self.p_p = 0
self.start = 0
if force == 'end':
# It's the end.
self.p_p = 0
self.out("\n")
self.space = 0
if self.p_p:
self.out((self.br_toggle+'\n'+bq)*self.p_p)
self.space = 0
self.br_toggle = ''
if self.space:
if not self.lastWasNL: self.out(' ')
self.space = 0
if self.a and ((self.p_p == 2 and LINKS_EACH_PARAGRAPH) or force == "end"):
if force == "end": self.out("\n")
newa = []
for link in self.a:
if self.outcount > link['outcount']:
self.out(" ["+ str(link['count']) +"]: " + urlparse.urljoin(self.baseurl, link['href']))
if has_key(link, 'title'): self.out(" ("+link['title']+")")
self.out("\n")
else:
newa.append(link)
if self.a != newa: self.out("\n") # Don't need an extra line when nothing was done.
self.a = newa
if self.abbr_list and force == "end":
for abbr, definition in self.abbr_list.items():
self.out(" *[" + abbr + "]: " + definition + "\n")
self.p_p = 0
self.out(data)
self.outcount += 1
def handle_data(self, data):
if r'\/script>' in data: self.quiet -= 1
if self.style:
self.style_def.update(dumb_css_parser(data))
self.o(data, 1)
def unknown_decl(self, data): pass
def wrapwrite(text):
text = text.encode('utf-8')
try: #Python3
sys.stdout.buffer.write(text)
except AttributeError:
sys.stdout.write(text)
def html2text_file(html, out=wrapwrite, baseurl=''):
h = _html2text(out, baseurl)
h.feed(html)
h.feed("")
return h.close()
def html2text(html, baseurl=''):
return optwrap(html2text_file(html, None, baseurl))
class Storage: pass
options = Storage()
options.google_doc = False
options.ul_item_mark = '*'
def usage():
print "\nOptions:"
print "-c, --continue: Pick up where the last export left off, using the same parameters from"
print " the last export. If other options are specified, they will override any parameters"
print " from the last export."
print "-e, --evernote-user: The Evernote email username (NOT the Evernote username) to send messages to. [required]"
print " Username only; do not include the '@m.evernote.com'!"
print "-g, --gmail-user: The gmail username to send messages from. [required]"
print " Username only; do not include the '@gmail.com'!"
print "-m, --maximum: The maximum number of messages that should be sent."
print " If you do not specify a maximum, all messages will be sent."
print " Note that Evernote limits the number of notes that can be added via e-mail in a single day."
print " For free accounts, the limit is 50; for premium accounts, it is 250."
print "-n, --notebook: The name of the Evernote notebook to put sent notes in."
print " If you do not specify a notebook, sent notes will be put in the default notebook."
print "-s, --skip: The number of articles to skip before sending the first e-mail message."
print " Useful for picking up where you left off from the previous day if you"
print " ran into Evernote's e-mail submission daily limit."
print "-h, --help: Print this message and exit."
print
print "When prompted for a password, enter the password for the sender gmail address."
print "It is expected that the exported starred items are in a file named 'starred_json' in the current working directory."
try:
opts, args = getopt.getopt( sys.argv[1:], "ce:m:n:g:s:h", ["continue","evernote-user=","maximum=","notebook=","gmail-user=","skip=","help"])
except getopt.GetoptError as err:
print str(err)
usage()
sys.exit(2)
sender_user = ""
evernote_user = ""
notebook = ""
message_limit = -1
skip_count = 0
continue_from_prev = False
for o, a in opts:
if o in ("-c", "--continue"):
continue_from_prev = True
elif o in ("-g", "--gmail-user"):
sender_user = a
elif o in ("-e", "--evernote-user"):
evernote_user = a
elif o in ("-m", "--maximum"):
message_limit = int(a)
elif o in ("-n", "--notebook"):
notebook = a
elif o in ("-s", "--skip"):
skip_count = int(a)
elif o in ("-h", "--help"):
usage()
sys.exit()
if continue_from_prev:
if not os.path.exists("continuation.txt"):
print "Continuation data file not found; cannot continue."
sys.exit()
last_session_data = open("continuation.txt")
if not last_session_data.closed:
val = pickle.load( last_session_data ) # skip count
if skip_count == 0:
skip_count = val
val = pickle.load( last_session_data ) # limit
if message_limit == -1:
message_limit = val
val = pickle.load( last_session_data ) # notebook
if not notebook:
notebook = val
val = pickle.load( last_session_data ) # sender
if not sender_user:
sender_user = val
val = pickle.load( last_session_data ) # evernote username
if not evernote_user:
evernote_user = val
print "Continuing with:"
print " Skip count: %d" % skip_count
print " Message limit: %d" % message_limit
print " Notebook: %s" % notebook
print " gmail username: %s" % sender_user
print " Evernote username: %s" % evernote_user
else:
print "Continuation data file cannot be opened; cannot continue."
sys.exit()
if not sender_user or not evernote_user:
print "Missing required parameter."
usage()
sys.exit()
sender_addr = sender_user + "@gmail.com"
evernote_addr = evernote_user + "@m.evernote.com"
FROM = sender_user
TO = [evernote_addr] #must be a list
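# FROM/TO are leftovers from the e-mail submission variant of this script;
# the .enex export below just writes to stdout.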
json_file = open("starred.json")
json_dict = json.loads( unicode(json_file.read(), encoding="utf-8") )
item_list = json_dict[ "items" ]
#print "Number of articles found in json export: %d" % len(item_list)
if message_limit < 0:
message_limit = len(item_list)
#print "Number of notes to be added to Evernote: %d" % message_limit
#if message_limit > 50:
# print "Warning: if you have a free account, adding more than 50 notes in one day will most likely fail."
#if message_limit > 250:
# print "Warning: adding more than 250 notes in one day will most likely fail."
#if skip_count > 0:
#print "The first %d articles will be skipped" % skip_count
#sender_pwd = getpass.getpass()
original_message_limit = message_limit
sent_count = 0
fail_count = 0
article_num = 0
note_count = 0
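# Emit the .enex envelope by hand: the XML declaration, the en-export
# DOCTYPE, and the <en-export> root element that wraps every <note> below.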
print('<?xml version="1.0" encoding="UTF-8"?>')
print('<!DOCTYPE en-export SYSTEM "http://xml.evernote.com/pub/evernote-export2.dtd">')
print('<en-export export-date="20130320T150950Z" application="Evernote" version="Evernote Mac 5.0.6 (400960)">')
for s in item_list:
article_num = article_num + 1
if skip_count > 0:
skip_count = skip_count - 1
continue
note_count = note_count + 1
subject = ""
if 'title' in s.keys():
subject = unicode(s["title"]).encode('ascii', 'replace')
if notebook:
subject = subject + " @" + notebook
msg_body = ""
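# Each note's content is ENML: a complete XML document (with its own
# declaration and en-note DOCTYPE) wrapped in CDATA inside <content>.
# The title is inserted into the XML unescaped, so titles containing
# '&' or '<' can produce an invalid .enex (see the discussion further down).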
msg_body = msg_body + '<note><title>'+subject+'</title><content><![CDATA[<?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd"> <en-note>'
msg_url = ""
if 'canonical' in s.keys():
d = s["canonical"][0]
msg_url = unicode(d["href"]).encode('ascii', 'replace')
#msg_body = msg_body + "URL: " + msg_url + "\r\n"
if 'alternate' in s.keys():
d = s["alternate"][0]
msg_url = unicode(d["href"]).encode('ascii', 'replace')
#msg_body = msg_body + "Alt URL:" + msg_url + "\r\n"
if 'summary' in s.keys():
d = s["summary"]
#msg_body = msg_body + "Summary: " + unicode(d["content"]).encode('ascii', 'replace') + "\r\n"
if 'content' in s.keys():
d = s["content"]
msg_body = msg_body + html2text(unicode(d["content"]).encode('ascii', 'replace'))
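# Crop the body to the first 1000 characters; long notes seem to trouble
# Evernote's importer (see the note above the script).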
msg_body = (msg_body[:1000] + '..') if len(msg_body) > 1000 else msg_body
msg_body = msg_body + "</en-note>]]>\r\n</content>\r\n"
msg_body = msg_body + "<note-attributes><source>web.clip</source><source-url>" + msg_url + "</source-url></note-attributes>"
msg_body = msg_body + "</note>\r\n"
print(msg_body)
message_limit = message_limit - 1
if message_limit < 1:
break
print('</en-export>')
cont_file = open( "continuation.txt", "w" )
# Write new skip count, message count,
pickle.dump( article_num, cont_file ) # New skip count = number of last-sent article
pickle.dump( original_message_limit, cont_file )
pickle.dump( notebook, cont_file )
pickle.dump( sender_user, cont_file )
pickle.dump( evernote_user, cont_file )
cont_file.close()
#print "Continuation data saved to 'continuation.txt'"
Wow, cool! I tried this, but it just produced notes with no bodies. However, I took your basic approach and made a new script (export2enex.py) that seems to work pretty well for me on Windows. I imported my 579 articles, and a spot check of a couple dozen shows that they all exported correctly to .enex and then imported into Evernote, complete with proper formatting, embedded images, etc.
Hmmm, I'll try with the Windows client this afternoon; the Mac client is still giving me the same grief. Maybe the Mac client is being too fussy...
Thanks for the work so far! I'm getting the same result on my Mac - notes with all the metadata but "invalid content".
Everything appears in Evernote except the content - and the notes won't sync.
I have compared the XML from a valid note created and exported by Evernote versus one created by the script, and am so far unable to see any differences in syntax or any errors - so I'm pretty confused.
I don't have a Mac, but I have a friend who does, and he's a dab hand at Python, so I'll see if I can get him to take a look.
So, my friend tried it on his Mac, and he got the same results (namely, notes with metadata but empty/invalid content). We even tried importing the .enex file that I successfully imported on my Windows machine, but the results were the same. So, it looks like the problem is with the Evernote client on the Mac.
Hi - the new .enex-generating method also fails for me on Windows (and OS X, as already mentioned - my JSON contains more than 3000 entries from around 200 different feeds). The first error I got was that some titles were empty. I fixed that by substituting an "Untitled" string for empty titles. Still got the same error. So I started bisecting the entries to see what the problem was. The first half still had the same problem; the second half had a different problem (an unrecognised character (???)). After doing further research on other projects attempting (but not fully managing) to do the same, I lost confidence that this XML reverse-engineering can be made to work reliably for a wide array of entries (titles can/can't have tags, there are length limits, plus a plethora of limitations on allowed HTML tags and attributes).
So I decided instead to try the following approach: just generate HTML files from the JSON. One can then simply drag and drop the HTML files into Evernote - that seems to work fine, as Evernote does its own validation and transformation into its own "purified" HTML subset. All special characters are preserved, and all images are preserved. The only drawback that I can see is that the URL metadata is now in the body of the note instead of in its proper place as per the .enex method.
Pull request is here: https://github.com/kerchen/export_gr2evernote/pull/4
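For anyone who just wants the idea without digging through the pull request, here is a minimal, illustrative sketch of that HTML-file approach (Python 3, and not the code from the PR). It assumes the same starred.json layout the script above reads ('items' entries with optional 'title', 'canonical'/'alternate' hrefs and 'content'/'summary' bodies); the html_notes directory and the note_0000.html naming are arbitrary choices:

import json
import os

with open("starred.json") as f:
    items = json.load(f)["items"]

os.makedirs("html_notes", exist_ok=True)
for i, item in enumerate(items):
    # Empty-title workaround mentioned above: fall back to "Untitled".
    title = item.get("title") or "Untitled"
    # Pick the article URL; 'alternate' overrides 'canonical', as in the
    # script above.
    url = ""
    for key in ("canonical", "alternate"):
        if item.get(key):
            url = item[key][0].get("href", "")
    body = (item.get("content") or item.get("summary") or {}).get("content", "")
    # Keep the source URL in the body, since plain HTML has no URL field.
    html = ("<html><head><meta charset='utf-8'><title>%s</title></head>"
            "<body><p>URL: <a href='%s'>%s</a></p>%s</body></html>"
            % (title, url, url, body))
    with open(os.path.join("html_notes", "note_%04d.html" % i),
              "w", encoding="utf-8") as out:
        out.write(html)

Dragging the generated files into Evernote then lets Evernote run its own ENML validation and cleanup, which is exactly the point of this approach.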
With the two new ways of importing, this is probably resolved; closing.
Hi, the Evernote notes from the starred items all look like this:
(text version added at the end of this issue)
This is the latest version of Evernote on OS X 10.8.2.
Is this intended behaviour? (The unopened tag makes me think maybe it isn't...)
Alt URL:http://ffffound.com/image/58a1b400a08be5aff0d7f8c05f1c467dd3199118 Summary:
via http://dropanchors.tumblr.com/