Compare commits


39 Commits

Author  SHA1        Message  Date

Phyks  565a5c6557  Fix error in known.py  2015-01-02 18:44:19 +01:00
Phyks  1cd86d9e47  Add known upload script  2015-01-02 18:40:31 +01:00
Phyks  11da6ecbba  Fix issues  2014-10-18 22:45:56 +02:00
Phyks  a6eb3e0c2c  Typography for markdown files  2014-08-07 22:38:43 +02:00
Phyks  0ef7a576e4  Update for markdown support  2014-08-01 00:45:47 +02:00
Phyks  2d1cc39d68  Fix RSS + Markdown support  2014-07-30 15:30:21 +02:00
Phyks  e63bf27c2f  Remaining bug with URLs in RSS feed  2014-07-17 23:50:26 +02:00
Phyks  24cb3a9182  Forgot that internal tags (@author and so) will end in the RSS description… Fixed  2014-07-13 01:08:14 +02:00
Phyks  0f15a2b471  Update RSS feed to make it valid  2014-07-12 15:33:27 +02:00
Phyks  dc9e9a10af  Correct RFC822 for dates  2014-07-11 13:35:30 +02:00
Phyks  47ef046e55  Update README  2014-04-20 22:51:57 +02:00
Phyks  1d845d4db2  Bugfixes in RSS and tag pages  2014-04-13 19:10:37 +02:00
                   * Correct article ordering by date in tags pages
                   * Clickable links in RSS
Phyks  b659014bbc  Typo in README  2014-01-23 01:12:17 +01:00
Phyks  f296a8b771  README well formatted  2014-01-23 01:04:18 +01:00
Phyks  e1df874bad  README file updated  2014-01-23 01:03:22 +01:00
Phyks  56cea4be90  README file added  2014-01-23 00:59:44 +01:00
Phyks  c7edcd3fed  Added experimental support for markdown  2014-01-22 02:24:20 +01:00
Phyks  73271e28b8  Bug correction in URL parameter (trailing /)  2014-01-22 01:45:20 +01:00
Phyks  eef8221073  Added ability to link stylesheet to RSS file + flake8 compliant  2013-12-06 15:07:26 +01:00
Phyks  229c7801db  Bug fixes:  2013-12-02 15:23:41 +01:00
                   * Problem in RSS: bad title because of html tags + bad link
                   * New display for tags
Phyks  ce95b42138  More semantics in HTML output  2013-11-24 22:54:29 +01:00
Phyks  bb2e831cde  Many bug corrections  2013-11-23 21:00:06 +01:00
Phyks  dbd6cbac21  Bug fix in header: @titre instead of @title  2013-11-17 00:38:05 +01:00
Phyks  df99a53e24  Update default parameters.  2013-11-17 00:34:22 +01:00
Phyks  91b1e0396e  Bug corrections and added the design of my own blog as demo code  2013-11-17 00:32:35 +01:00
Phyks  0c99b44c9a  Test  2013-11-17 00:19:21 +01:00
Phyks  5bb9e3af05  Added tag list at the end of articles  2013-10-27 20:41:23 +01:00
Phyks  d5075a0d2c  Added support for external includes for header and footer in static pages.  2013-10-27 20:15:46 +01:00
Phyks  9dce2689d0  RSS + Tags  2013-10-27 14:14:10 +01:00
                   * RSS is fully functional
                   * Images for tags are automatically added
Phyks  ecfab3c7b1  Bug correction + RSS feed code completed  2013-10-27 14:01:24 +01:00
Phyks  b2bef345f2  Bug corrections  2013-10-22 23:42:21 +02:00
Phyks  548ad16f7b  Some comments in the code  2013-10-22 22:39:11 +02:00
Phyks  7b44e77b3a  Now sort articles by the date in them and not by system date (possibility to postdate an article, for example)  2013-07-28 16:23:17 +02:00
Phyks  0102b7f66c  Localized months names  2013-07-28 12:37:41 +02:00
Phyks  30f0353e50  Added an archive page  2013-07-27 23:00:42 +02:00
Phyks  a551cbd4e3  Still bug corrections... Tests continue :)  2013-07-27 22:21:43 +02:00
Phyks  43f7621f7f  Bug correction, now using git ls built-in function instead of directory listing to avoid problems with not-added files. TODO: Error while generating articles html and month pages  2013-07-27 21:59:48 +02:00
Phyks  fc5c8c3b1f  Continuing bug correction: errors in index generation  2013-07-27 21:23:50 +02:00
Phyks  a4e9772f25  Bug correction and old code left deleted  2013-07-27 21:08:13 +02:00
16 changed files with 1129 additions and 259 deletions

.gitignore

@@ -1,2 +1,3 @@
 *~
 *.swp
+blog/

LICENSE (deleted)

@@ -1,11 +0,0 @@
-/*
- * --------------------------------------------------------------------------------
- * "THE NO-ALCOHOL BEER-WARE LICENSE" (Revision 42):
- * Phyks (webmaster@phyks.me) wrote this file. As long as you retain this notice you
- * can do whatever you want with this stuff (and you can also do whatever you want
- * with this stuff without retaining it, but that's not cool...). If we meet some
- * day, and you think this stuff is worth it, you can buy me a --beer-- soda in
- * return.
- * Phyks
- * ---------------------------------------------------------------------------------
- */

README.md

@@ -1,53 +1,114 @@
 Blogit
 ======
 
-A git based blogging software. Just as Jekyll and so, it takes your articles
-as html files and computes them to generate static pages and an RSS feed to
-serve behind a webserver. It uses git as a backend file manager (as git
-provides some useful features like history and hooks) and Python for
-scripting the conversion process. You can customize the python scripts to
-handle special tags (ie, not standard HTML tags) just as <code> for example.
-See the params file in the raw dir to modify this.
-
-This project is still a WIP.
-
-How it works ?
-==============
-
-There are three directories under the tree: raw for your raw HTML articles
-and header/footer, gen (created by the script) for the temporary generated
-files, and blog for the blog folder to serve behind the webserver.
-
-Articles must be in folders year/month/ARTICLE.html (ARTICLE is whatever you
-want) and some extra comments must be put in the article file for the script
-to handle it correctly. See the test.html example file for more info on how
-to do it.
-
-You can put a file in "wait mode" and not publish it yet, just by adding
-.ignore at the end of its filename. Every file that you put in raw and that
-is not a .html file is just copied at the same place in the blog dir (to put
-images in your articles, for example, just put them beside your articles and
-make a relative link in your HTML article).
-
-You should change the params file (raw/params) before starting, to correctly
-set your blog url, your email address and the blog title (among other
-parameters).
-
-When you finish editing an article, just git add it and commit. The
-pre-commit.py hook will run automatically and generate your working copy.
-
-Note about tags: Tags are automatically handled and a page for each tag is
-automatically generated. A page with all the articles for each month and each
-year is also automatically generated.
-
-Note: Don't remove gen/ content unless you know what you're doing. These
-files are temporary files for the blog generation but they are useful to
-regenerate the RSS feed for example. If you delete them, you may need to
-regenerate them.
-
-Important note: This is currently a beta version and the hook isn't set to
-run automatically for now. You have to manually run pre-commit.py (or move it
-to .git/hooks but this has never been tested ^^).
-
-Example of syntax for an article
-================================
-```HTML
-<!--
-@tags=***  //put here the tags for your article, comma-separated list
-@titre=*** //Title for your article
-@author=Phyks //Name of the author (not displayed by now)
-@date=23062013-1337 //Date in the format DDMMYYYY-HHMM
--->
-<article content> (== Whatever you want)</article>
-```
-
-LICENSE
-=======
-TLDR; I don't give a damn to anything you can do using this code. It would
-just be nice to quote where the original code comes from.
-
- * --------------------------------------------------------------------------------
- * "THE NO-ALCOHOL BEER-WARE LICENSE" (Revision 42):
- * Phyks (webmaster@phyks.me) wrote this file. As long as you retain this notice you
- * can do whatever you want with this stuff (and you can also do whatever you want
- * with this stuff without retaining it, but that's not cool...). If we meet some
- * day, and you think this stuff is worth it, you can buy me a <del>beer</del> soda
- * in return.
- * Phyks
- * ---------------------------------------------------------------------------------
+This script aims at building a static blog above a git repo. This way, you can
+use git abilities to have full archives and backups of your blog. Many scripts
+aim at doing this, and this is just one more. I needed something more personal
+and fitted to my needs, so I came up with this code. It's not very beautiful
+and can mostly be optimized, but it's working for me. It may not be fitted to
+your needs, but it's up to you to see it.
+
+This script is just a python script that should be run as a git hook. You can
+see a working version at http://phyks.me and the repository behind it is
+publicly viewable at http://git.phyks.me/Blog. You should browse this
+repository for example configuration and usage if you are interested in this
+script.
+
+This script has been developed by Phyks (phyks@phyks.me). For any suggestion
+or remark, please send me an e-mail.
+
+## Installation
+
+1. Clone this repo.
+2. Clear the .git directory and initialize a new empty git repo to store your
+   blog.
+3. Move the `pre-commit.py` script to `.git/hooks/pre-commit`.
+4. Edit the `raw/params` file to fit your needs.
+5. The `raw` folder comes with an example blog architecture. Delete it and
+   make your own.
+6. You are ready to go!
+
+## Params
+
+Available options in `raw/params` are:
+
+* `BLOG_TITLE`: the title of the blog, displayed in the <title> element of
+  rendered pages.
+* `NB_ARTICLES_INDEX`: number of articles to display on the index page.
+* `BLOG_URL`: your blog base URL.
+* `IGNORE_FILES`: a comma-separated list of files to ignore.
+* `WEBMASTER`: webmaster e-mail to put in the RSS feed.
+* `LANGUAGE`: language param to put in the RSS feed.
+* `DESCRIPTION`: blog description, for the RSS feed.
+* `COPYRIGHT`: copyright information, for the RSS feed.
+* `SEARCH`: comma-separated list of elements to search for and replace
+  (custom regex, see code for more info).
+* `REPLACE`: corresponding elements for replacement.
+
+## Usage
+
+This script will use three folders:
+
+* `raw/` will contain your raw files
+* `blog/` will contain the fully generated files to serve _via_ your http
+  server
+* `gen/` will store temporary intermediate files
+
+All the files you have to edit are located in the `raw/` folder. It contains
+by default an example version of a blog. You should start by renaming it and
+making your own.
+
+Articles can be edited in HTML directly or in Markdown. They must be located
+in a `raw/year/month` folder, according to their date of publication, and end
+with .html for HTML articles and .md for Markdown articles. The basic content
+of an article is:
+
+````
+<!--
+@author=AUTHOR_NAME
+@date=DDMMYYYY-HHMM
+@title=TITLE
+@tags=TAGS
+-->
+CONTENT
+````
+
+where TAGS is a comma-separated list. Tags are automatically created if they
+do not exist yet. The HTML comment *must* be at the beginning of the document
+and is parsed to set the metadata of the article. CONTENT is then either an
+HTML string or a Markdown-formatted one.
+
+You can ignore an article, to not make it publicly visible while you write
+it, simply by adding a .ignore extension.
+
+When you finish editing your article, you can add the files to git and commit
+to launch the script. You can also manually call the script with the
+`--force-regen` option if you want to rebuild your entire blog.
+
+## Header file
+
+You can use the `@blog_url` syntax anywhere. It will be replaced by the URL
+of the blog, as defined in the parameters (this is useful to include CSS
+etc.). You can also use `@tags`, which will be replaced by the list of tags,
+and `@articles` for the list of latest articles.
+
+## Static files
+
+In static files in the raw folder (such as `divers.html` in the demo code),
+you can use `#base_url`, which will be replaced by the base URL of the blog,
+as defined in the parameters. This is useful to make some links.
+
+## Alternatives
+
+There exist many alternatives to this script, but they didn't fit my needs
+(and were not all tested):
+
+* fugitive: http://shebang.ws/fugitive-readme.html
+* Jekyll: http://jekyllrb.com/ and Octopress: http://octopress.org/
+* Blogofile: http://www.blogofile.com/
+
+## LICENSE
+
+--------------------------------------------------------------------------------
+"THE NO-ALCOHOL BEER-WARE LICENSE" (Revision 42): Phyks
+(webmaster@phyks.me) wrote this file. As long as you retain this notice you
+can do whatever you want with this stuff (and you can also do whatever you
+want with this stuff without retaining it, but that's not cool...). If we
+meet some day, and you think this stuff is worth it, you can buy me a
+<del>beer</del> soda in return. Phyks
+---------------------------------------------------------------------------------
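The metadata comment described above can be parsed with a few lines of Python. This is an illustrative sketch only: the `parse_metadata` helper below is hypothetical (the actual hook scans lines for individual `@tags=` and `@date=` markers rather than collecting all pairs).

```python
import re


def parse_metadata(article):
    """Collect @key=value pairs from the leading <!-- ... --> comment.

    Hypothetical helper, not part of the repository; it assumes the
    comment sits at the very beginning of the article, as the README
    requires.
    """
    match = re.match(r'\s*<!--(.*?)-->', article, re.DOTALL)
    if match is None:
        return {}
    meta = {}
    for line in match.group(1).splitlines():
        line = line.strip()
        if line.startswith("@") and "=" in line:
            key, value = line[1:].split("=", 1)
            meta[key] = value.strip()
    return meta


article = """<!--
@author=AUTHOR_NAME
@date=23062013-1337
@title=TITLE
@tags=python, git
-->
<p>CONTENT</p>"""
print(parse_metadata(article)["tags"])  # → python, git
```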

known.py (new file)

@@ -0,0 +1,87 @@
#!/usr/bin/env python3
"""
Script to import articles from a blogit blog in Known.
Must be run with a correct API_KEY (see below) and from the `gen/` folder.
"""
from bs4 import BeautifulSoup
from bs4.element import Comment
import os
import base64
import hashlib
import hmac
import requests
import sys


def list_directory(path):
    fichier = []
    for root, dirs, files in os.walk(path):
        for i in files:
            fichier.append(os.path.join(root, i))
    return fichier


def hmac_sha256(message, key):
    return base64.b64encode(hmac.new(key.encode("utf-8"),
                                     message.encode("utf-8"),
                                     digestmod=hashlib.sha256)
                            .digest()).decode("utf-8")


def known_api(username, api_key, type, payload):
    headers = {
        "X-KNOWN-USERNAME": username,
        "X-KNOWN-SIGNATURE": hmac_sha256("/"+type+"/edit", api_key)
    }
    return requests.post("https://known.phyks.me/"+type+"/edit",
                         data=payload,
                         headers=headers)


if len(sys.argv) < 3:
    print("Usage: "+sys.argv[0]+" USERNAME API_KEY [file]")
    sys.exit()

API_USERNAME = sys.argv[1]
API_KEY = sys.argv[2]

if len(sys.argv) <= 3:
    # Flatten the per-year listings into a single list of files.
    files = [f for i in ["2013", "2014", "2015"] for f in list_directory(i)]
else:
    files = [sys.argv[3]]

for file in files:
    print("Processing file "+file)
    with open(file, 'r') as fh:
        soup = BeautifulSoup(fh.read())

    content = []
    for i in soup.div.find('header').next_siblings:
        if i.name == "footer":
            break
        if type(i) != Comment:
            content.append(i)

    comment = soup.div.findAll(text=lambda text: isinstance(text, Comment))
    comment = [i.strip() for i in comment[0].strip().split('\n')]
    for j in comment:
        if j.startswith("@title"):
            title = j.split("=")[1]
        elif j.startswith("@date"):
            date = j.split("=")[1]
        elif j.startswith("@tags"):
            tags = j.split("=")[1]
            tags = ', '.join(["#"+i.strip() for i in tags.split(',')])

    meta = {
        "title": title,
        "date": (str(date[4:8])+":"+str(date[2:4])+":"+str(date[0:2]) +
                 " "+str(date[9:11])+":"+str(date[11:13])+":00"),
        "tags": tags,
    }

    content = ''.join([str(i) for i in content]).strip()
    content += "\n<p>"+meta["tags"]+"</p>"
    payload = {"body": content,
               "title": meta["title"],
               "created": meta["date"]}
    known_api(API_USERNAME, API_KEY, "entry", payload)
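The Known API call above authenticates with a base64-encoded HMAC-SHA256 signature of the request path. The signing step can be exercised in isolation; the key below is made up, and only the path `/entry/edit` matches what `known_api` actually signs.

```python
import base64
import hashlib
import hmac


def hmac_sha256(message, key):
    # Same signing helper as in known.py:
    # signature = base64(HMAC-SHA256(key, message)).
    return base64.b64encode(hmac.new(key.encode("utf-8"),
                                     message.encode("utf-8"),
                                     digestmod=hashlib.sha256)
                            .digest()).decode("utf-8")


# Hypothetical API key; the signature covers only the request path.
signature = hmac_sha256("/entry/edit", "my-secret-api-key")
print(signature)
```

A SHA-256 digest is 32 bytes, so the base64 signature is always 44 characters.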

pre-commit.py
@@ -1,9 +1,22 @@
-#!/usr/bin/python
-# TODO : What happens when a file is moved with git ?
-# TODO : Test the whole thing
-# TODO : What happens when I run it as a hook ?
-# TODO : What happens when I commit with -a option ?
+#!/usr/bin/env python3
+# Blogit script written by Phyks (Lucas Verney) for his personal use. I
+# distribute it with absolutely no warranty, except that it works for me on my
+# blog :)
+
+# This script is a pre-commit hook that should be placed in your .git/hooks
+# folder to work. Read the README file for more info.
+
+# LICENSE :
+# -----------------------------------------------------------------------------
+# "THE NO-ALCOHOL BEER-WARE LICENSE" (Revision 42):
+# Phyks (webmaster@phyks.me) wrote this file. As long as you retain this notice
+# you can do whatever you want with this stuff (and you can also do whatever
+# you want with this stuff without retaining it, but that's not cool...). If
+# we meet some day, and you think this stuff is worth it, you can buy me a
+# <del>beer</del> soda in return.
+# Phyks
+# ----------------------------------------------------------------------------
 
 import sys
 import getopt
@@ -12,18 +25,176 @@ import os
 import datetime
 import subprocess
 import re
+import locale
+import markdown
+from email import utils
+from hashlib import md5
+from functools import cmp_to_key
 from time import gmtime, strftime, mktime
+from bs4 import BeautifulSoup
+
+
+# ========================
+# Github Flavored Markdown
+# ========================
+def gfm(text):
+    # Extract pre blocks.
+    extractions = {}
+
+    def pre_extraction_callback(matchobj):
+        digest = md5(matchobj.group(0).encode("utf-8")).hexdigest()
+        extractions[digest] = matchobj.group(0)
+        return "{gfm-extraction-%s}" % digest
+    pattern = re.compile(r'<pre>.*?</pre>', re.MULTILINE | re.DOTALL)
+    text = re.sub(pattern, pre_extraction_callback, text)
+
+    # Prevent foo_bar_baz from ending up with an italic word in the middle.
+    def italic_callback(matchobj):
+        s = matchobj.group(0)
+        if list(s).count('_') >= 2:
+            return s.replace('_', '\\_')
+        return s
+    text = re.sub(r'^(?! {4}|\t)\w+_\w+_\w[\w_]*', italic_callback, text)
+
+    # In very clear cases, let newlines become <br /> tags.
+    def newline_callback(matchobj):
+        if len(matchobj.group(1)) == 1:
+            return matchobj.group(0).rstrip() + '  \n'
+        else:
+            return matchobj.group(0)
+    pattern = re.compile(r'^[\w\<][^\n]*(\n+)', re.MULTILINE)
+    text = re.sub(pattern, newline_callback, text)
+
+    # Insert pre block extractions.
+    def pre_insert_callback(matchobj):
+        return '\n\n' + extractions[matchobj.group(1)]
+    text = re.sub(r'\{gfm-extraction-([0-9a-f]{32})\}', pre_insert_callback,
+                  text)
+
+    def handle_typography(text):
+        """Add non breakable spaces before double punctuation signs."""
+        text = text.replace(' :', '&nbsp;:')
+        text = text.replace(' ;', '&nbsp;;')
+        text = text.replace(' !', '&nbsp;!')
+        text = text.replace(' ?', '&nbsp;?')
+        text = text.replace(' /', '&nbsp;/')
+        return text
+    return handle_typography(text)
+
+
+# Test suite.
+try:
+    from nose.tools import assert_equal
+except ImportError:
+    def assert_equal(a, b):
+        assert a == b, '%r != %r' % (a, b)
+
+
+def test_single_underscores():
+    """Don't touch single underscores inside words."""
+    assert_equal(
+        gfm('foo_bar'),
+        'foo_bar',
+    )
+
+
+def test_underscores_code_blocks():
+    """Don't touch underscores in code blocks."""
+    assert_equal(
+        gfm('    foo_bar_baz'),
+        '    foo_bar_baz',
+    )
+
+
+def test_underscores_pre_blocks():
+    """Don't touch underscores in pre blocks."""
+    assert_equal(
+        gfm('<pre>\nfoo_bar_baz\n</pre>'),
+        '\n\n<pre>\nfoo_bar_baz\n</pre>',
+    )
+
+
+def test_pre_block_pre_text():
+    """Don't treat pre blocks with pre-text differently."""
+    a = '\n\n<pre>\nthis is `a\\_test` and this\\_too\n</pre>'
+    b = 'hmm<pre>\nthis is `a\\_test` and this\\_too\n</pre>'
+    assert_equal(
+        gfm(a)[2:],
+        gfm(b)[3:],
+    )
+
+
+def test_two_underscores():
+    """Escape two or more underscores inside words."""
+    assert_equal(
+        gfm('foo_bar_baz'),
+        'foo\\_bar\\_baz',
+    )
+
+
+def test_newlines_simple():
+    """Turn newlines into br tags in simple cases."""
+    assert_equal(
+        gfm('foo\nbar'),
+        'foo  \nbar',
+    )
+
+
+def test_newlines_group():
+    """Convert newlines in all groups."""
+    assert_equal(
+        gfm('apple\npear\norange\n\nruby\npython\nerlang'),
+        'apple  \npear  \norange\n\nruby  \npython  \nerlang',
+    )
+
+
+def test_newlines_long_group():
+    """Convert newlines in even long groups."""
+    assert_equal(
+        gfm('apple\npear\norange\nbanana\n\nruby\npython\nerlang'),
+        'apple  \npear  \norange  \nbanana\n\nruby  \npython  \nerlang',
+    )
+
+
+def test_newlines_list():
+    """Don't convert newlines in lists."""
+    assert_equal(
+        gfm('# foo\n# bar'),
+        '# foo\n# bar',
+    )
+    assert_equal(
+        gfm('* foo\n* bar'),
+        '* foo\n* bar',
+    )
+
+
+# =========
+# Functions
+# =========
 # Test if a variable exists (== isset function in PHP)
+# ====================================================
 def isset(variable):
     return variable in locals() or variable in globals()
 
+
+# Test whether a variable is an int or not
+# ========================================
+def isint(variable):
+    try:
+        int(variable)
+        return True
+    except ValueError:
+        return False
+
+
 # List all files in path directory
 # Works recursively
 # Return files list with path relative to current dir
+# ===================================================
 def list_directory(path):
     fichier = []
     for root, dirs, files in os.walk(path):
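The French typography rule applied by the `handle_typography` helper nested in `gfm()` (a non-breaking space before double punctuation) can be demonstrated standalone. This sketch rewrites the repeated `replace` calls as an equivalent loop:

```python
def handle_typography(text):
    # Same substitutions as the helper nested in gfm(): insert a
    # non-breaking space entity before French double punctuation marks.
    for mark in (':', ';', '!', '?', '/'):
        text = text.replace(' ' + mark, '&nbsp;' + mark)
    return text


print(handle_typography("Et alors ? Oui !"))  # → Et alors&nbsp;? Oui&nbsp;!
```

Note the `' /'` rule also fires inside plain text containing spaced slashes, which is why the helper only runs on article bodies.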
@@ -32,8 +203,11 @@ def list_directory(path):
     return fichier
 
-# Return a list with the tags of a given article (fh)
-def get_tags(fh):
+# Return a list with the tags of a given article
+# ==============================================
+def get_tags(filename):
+    try:
+        with open(filename, 'r') as fh:
             tag_line = ''
             for line in fh.readlines():
                 if "@tags=" in line:
@@ -45,37 +219,46 @@
             tags = [x.strip() for x in line[line.find("@tags=")+6:].split(",")]
             return tags
+    except IOError:
+        sys.exit("[ERROR] Unable to open file "+filename+".")
 
+# Return date of an article
+# =========================
+def get_date(filename):
+    try:
+        with open(filename, 'r') as fh:
+            for line in fh.readlines():
+                if "@date=" in line:
+                    return line[line.find("@date=")+6:].strip()
+        sys.exit("[ERROR] Unable to determine date in article "+filename+".")
+    except IOError:
+        sys.exit("[ERROR] Unable to open file "+filename+".")
 
-# Return the number latest articles in dir directory
-def latest_articles(directory, number):
-    now = datetime.datetime.now()
-    counter = 0
-    latest_articles = []
-
-    for i in range(int(now.strftime('%Y')), 0, -1):
-        if counter >= number:
-            break
-
-        if os.path.isdir(directory+"/"+str(i)):
-            for j in range(12, 0, -1):
-                if j < 10:
-                    j = "0"+str(j)
-
-                if os.path.isdir(directory+"/"+str(i)+"/"+str(j)):
-                    articles_list = list_directory(directory+str(i)+"/"+str(j))
-                    # Sort by date the articles
-                    articles_list.sort(key=lambda x: os.stat(x).st_mtime)
-                    latest_articles += articles_list[:number-counter]
-
-                    if len(latest_articles) < number-counter:
-                        counter += len(articles_list)
-                    else:
-                        counter = number
-    return latest_articles
+# Return the _number_ latest articles in _dir_ directory
+# ======================================================
+def latest_articles(directory, number):
+    try:
+        latest_articles = subprocess.check_output(["git",
+                                                   "ls-files",
+                                                   directory],
+                                                  universal_newlines=True)
+    except subprocess.CalledProcessError:
+        sys.exit("[ERROR] An error occurred when fetching file changes "
+                 "from git.")
+    latest_articles = latest_articles.strip().split("\n")
+    latest_articles = [x for x in latest_articles if (isint(x[4:8]) and
+                                                      (x.endswith(".html") or
+                                                       x.endswith(".md")))]
+    latest_articles.sort(key=lambda x: (get_date(x)[4:8], get_date(x)[2:4],
+                                        get_date(x)[:2], get_date(x)[9:]),
+                         reverse=True)
+    return latest_articles[:number]
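The new `latest_articles` sorts paths by reassembling the `DDMMYYYY-HHMM` article date into a tuple whose lexicographic order is chronological. The key construction can be sketched on its own:

```python
def date_key(date):
    # `date` is in DDMMYYYY-HHMM format, as in the @date article header.
    # Reorder the slices so that tuple comparison sorts chronologically:
    # (YYYY, MM, DD, HHMM).
    return (date[4:8], date[2:4], date[:2], date[9:])


dates = ["23062013-1337", "02012015-1844", "30072014-1530"]
# Newest first, as latest_articles does with reverse=True:
print(sorted(dates, key=date_key, reverse=True))
# → ['02012015-1844', '30072014-1530', '23062013-1337']
```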
 # Auto create necessary directories to write a file
+# =================================================
 def auto_dir(path):
     directory = os.path.dirname(path)
     try:

@@ -87,6 +270,7 @@ def auto_dir(path):
 
 # Replace some user specific syntax tags (to replace smileys for example)
+# =======================================================================
 def replace_tags(article, search_list, replace_list):
     return_string = article
     for search, replace in zip(search_list, replace_list):

@@ -94,6 +278,31 @@ def replace_tags(article, search_list, replace_list):
     return return_string
 
+# Return text in <div class="article"> for rss description
+# ========================================================
+def get_text_rss(content):
+    soup = BeautifulSoup(content)
+    date = soup.find(attrs={'class': 'date'})
+    date.extract()
+    title = soup.find(attrs={'class': 'article_title'})
+    title.extract()
+    return str(soup.div)
+
+
+def remove_tags(html):
+    return ''.join(BeautifulSoup(html).findAll(text=True))
+
+
+def truncate(text, length=100):
+    return text[:text.find('.', length) - 1] + ""
+
+
+# Set locale
+locale.setlocale(locale.LC_ALL, '')
+
+# ========================
+# Start of the main script
+# ========================
 try:
     opts, args = getopt.gnu_getopt(sys.argv, "hf", ["help", "force-regen"])
 except getopt.GetoptError:
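The `truncate` helper added above cuts one character before the first `.` found at or after `length` characters (the trailing `+ ""` in the original is a no-op). A standalone check, which also shows the caveat that `str.find` returns -1 when no period follows the cutoff, silently dropping the last two characters:

```python
def truncate(text, length=100):
    # Mirror of the helper in pre-commit.py: cut one character before
    # the first '.' found at or after `length` characters.
    return text[:text.find('.', length) - 1]


sample = "a" * 100 + "bb. tail"
print(truncate(sample))  # 100 'a's followed by a single 'b'
```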
@@ -119,6 +328,8 @@ for opt, arg in opts:
 # Set parameters with params file
 search_list = []
 replace_list = []
+months = ["Janvier", "Février", "Mars", "Avril", "Mai", "Juin", "Juillet",
+          "Août", "Septembre", "Octobre", "Novembre", "Décembre"]
 try:
     with open("raw/params", "r") as params_fh:
         params = {}
@@ -126,10 +337,18 @@ try:
             if line.strip() == "" or line.strip().startswith("#"):
                 continue
             option, value = line.split("=", 1)
+            option = option.strip()
             if option == "SEARCH":
-                search_list = value.strip().split(",")
+                search_list = [i.strip() for i in value.split(",")]
             elif option == "REPLACE":
-                replace_list = value.strip().split(",")
+                replace_list = [i.strip() for i in value.split(",")]
+            elif option == "MONTHS":
+                months = [i.strip() for i in value.split(",")]
+            elif option == "IGNORE_FILES":
+                params["IGNORE_FILES"] = [i.strip() for i in value.split(",")]
+            elif option == "BLOG_URL":
+                params["BLOG_URL"] = value.strip(" \n\t\r").rstrip("/")
             else:
                 params[option.strip()] = value.strip()
@@ -139,6 +358,7 @@ except IOError:
              "parameters. Does such a file exist ? See doc for more info "
              "on this file.")
 
+print("[INFO] Blog url is "+params["BLOG_URL"]+".")
 
 # Fill lists for modified, deleted and added files
 modified_files = []
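The hook fills those lists by asking git which files changed. The exact invocation is not visible in this hunk, so the sketch below is an assumption: it parses `git diff --name-status`-style output (tab-separated status letter and path) into the three lists the script uses.

```python
import subprocess  # used only by the optional staged_changes() call below


def parse_name_status(out):
    # Split `git diff --name-status` style output into added / modified /
    # deleted file lists, keyed on the status letter of each line.
    added, modified, deleted = [], [], []
    for line in out.splitlines():
        if not line.strip():
            continue
        status, name = line.split("\t", 1)
        if status.startswith("A"):
            added.append(name)
        elif status.startswith("M"):
            modified.append(name)
        elif status.startswith("D"):
            deleted.append(name)
    return added, modified, deleted


def staged_changes():
    # In a pre-commit hook, the staged changes would come from the index:
    out = subprocess.check_output(
        ["git", "diff", "--cached", "--name-status"],
        universal_newlines=True)
    return parse_name_status(out)
```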
@@ -176,8 +396,14 @@ if not force_regen:
     else:
         sys.exit("[ERROR] An error occurred when running git diff.")
 else:
-    shutil.rmtree("blog/")
-    shutil.rmtree("gen/")
+    try:
+        shutil.rmtree("blog/")
+    except FileNotFoundError:
+        pass
+    try:
+        shutil.rmtree("gen/")
+    except FileNotFoundError:
+        pass
     added_files = list_directory("raw")
 
 if not added_files and not modified_files and not deleted_files:
@@ -188,23 +414,27 @@
 for filename in list(added_files):
     direct_copy = False
-    if not filename.startswith("raw/"):
+    if (not filename.startswith("raw/") or filename.endswith("~") or
+            filename in params["IGNORE_FILES"]):
         added_files.remove(filename)
         continue
 
     try:
         int(filename[4:8])
-        years_list.append(filename[4:8])
+        if filename[4:8] not in years_list:
+            years_list.append(filename[4:8])
     except ValueError:
         direct_copy = True
 
     try:
-        int(filename[8:10])
-        months_list.append(filename[8:10])
+        int(filename[9:11])
+        if filename[9:11] not in months_list:
+            months_list.append(filename[9:11])
     except ValueError:
         pass
 
-    if ((not filename.endswith(".html") and not filename.endswith(".ignore"))
+    if ((not filename.endswith(".html") and not filename.endswith(".ignore")
+         and not filename.endswith(".md"))
             or direct_copy):
         # Note : this deal with CSS, images or footer file
         print("[INFO] (Direct copy) Copying directly the file "
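The slice indices used in these loops rely on the fixed `raw/YYYY/MM/` layout of article paths, so year and month can be read straight out of the string. A quick illustration (the path is made up):

```python
# Article paths have the fixed shape raw/YYYY/MM/name.(html|md), so the
# hook can slice the year and month directly out of the path string.
filename = "raw/2014/07/article.html"
year = filename[4:8]    # characters 4-7: "2014"
month = filename[9:11]  # characters 9-10: "07"
is_article = filename.endswith(".html") or filename.endswith(".md")
print(year, month, is_article)  # → 2014 07 True
```

This is also why the `int(filename[4:8])` probe doubles as the "is this an article or a directly copied asset?" test: non-dated paths raise `ValueError`.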
@@ -224,22 +454,26 @@ for filename in list(added_files):
 for filename in list(modified_files):
     direct_copy = False
-    if not filename.startswith("raw/"):
+    if (not filename.startswith("raw/") or filename.endswith("~")
+            or filename in params["IGNORE_FILES"]):
         modified_files.remove(filename)
         continue
 
     try:
         int(filename[4:8])
-        years_list.append(filename[4:8])
+        if filename[4:8] not in years_list:
+            years_list.append(filename[4:8])
     except ValueError:
         direct_copy = True
 
     try:
-        int(filename[8:10])
-        months_list.append(filename[8:10])
+        int(filename[9:11])
+        if filename[9:11] not in months_list:
+            months_list.append(filename[9:11])
     except ValueError:
         pass
 
-    if ((not filename.endswith("html") and not filename.endswith("ignore"))
+    if ((not filename.endswith(".html") and not filename.endswith(".ignore")
+         and not filename.endswith(".md"))
             or direct_copy):
         print("[INFO] (Direct copy) Updating directly the file "
               + filename[4:]+" in blog dir.")
@@ -248,7 +482,7 @@ for filename in list(modified_files):
         modified_files.remove(filename)
         continue
 
-    if filename.endswith("ignore"):
+    if filename.endswith(".ignore"):
         print("[INFO] (Not published) Found not published article "
               + filename[4:-7]+".")
         added_files.remove(filename)
@@ -257,27 +491,35 @@ for filename in list(modified_files):
 for filename in list(deleted_files):
     direct_copy = False
-    if not filename.startswith("raw/"):
+    if (not filename.startswith("raw/") or filename.endswith("~") or
+            filename in params["IGNORE_FILES"]):
         deleted_files.remove(filename)
         continue
 
     try:
         int(filename[4:8])
-        years_list.append(filename[4:8])
+        if filename[4:8] not in years_list:
+            years_list.append(filename[4:8])
     except ValueError:
         direct_delete = True
 
     try:
-        int(filename[8:10])
-        months_list.append(filename[8:10])
+        int(filename[9:11])
+        if filename[9:11] not in months_list:
+            months_list.append(filename[9:11])
     except ValueError:
         pass
 
-    if ((not filename.endswith("html") and not filename.endswith("ignore"))
-            or direct_delete):
+    if ((not filename.endswith(".html") and not filename.endswith(".ignore")
+         and not filename.endswith(".md"))
+            or (isset("direct_delete") and direct_delete is True)):
         print("[INFO] (Deleted file) Delete directly copied file "
               + filename[4:]+" in blog dir.")
-        os.unlink(filename)
+        try:
+            os.unlink(filename)
+        except FileNotFoundError:
+            pass
+        os.system('git rm '+filename)
         deleted_files.remove(filename)
         continue
@@ -287,11 +529,7 @@ print("[INFO] Deleted filed : "+", ".join(deleted_files))
 print("[INFO] Updating tags for added and modified files.")
 
 for filename in added_files:
-    try:
-        with open(filename, 'r') as fh:
-            tags = get_tags(fh)
-    except IOError:
-        sys.exit("[ERROR] Unable to open file "+filename+".")
+    tags = get_tags(filename)
 
     if not tags:
         sys.exit("[ERROR] (TAGS) In added article "+filename[4:]+" : "
@@ -311,8 +549,7 @@ for filename in added_files:
 for filename in modified_files:
     try:
-        with open(filename, 'r') as fh:
-            tags = get_tags(fh)
+        tags = get_tags(filename)
     except IOError:
         sys.exit("[ERROR] Unable to open file "+filename[4:]+".")
@@ -330,7 +567,10 @@ for filename in modified_files:
             print("[INFO] (TAGS) Found new tag "
                   + tag[:tag.index(".tmp")]+" for modified article "
                   + filename[4:]+".")
-            tags.remove(tag_file[9:])
+            try:
+                tags.remove(tag[9:])
+            except ValueError:
+                pass
         if (tag[tag.index("tags/") + 5:tag.index(".tmp")] not in tags
                 and filename[4:] in tag_file.read()):
             tag_old = tag_file.read()
@@ -343,12 +583,7 @@ for filename in modified_files:
print("[INFO] (TAGS) Deleted tag " + print("[INFO] (TAGS) Deleted tag " +
tag[:tag.index(".tmp")]+" in modified article " + tag[:tag.index(".tmp")]+" in modified article " +
filename[4:]+".") filename[4:]+".")
tags.remove(tag_file[9:]) else:
except IOError:
sys.exit("[ERROR] (TAGS) An error occurred when parsing tags "
" of article "+filename[4:]+".")
if not tag_file_write:
try: try:
os.unlink(tag) os.unlink(tag)
print("[INFO] (TAGS) No more article with tag " + print("[INFO] (TAGS) No more article with tag " +
@@ -357,12 +592,24 @@ for filename in modified_files:
print("[INFO] (TAGS) "+tag+" was found to be empty" print("[INFO] (TAGS) "+tag+" was found to be empty"
" but there was an error during deletion. " " but there was an error during deletion. "
"You should check manually.") "You should check manually.")
os.system('git rm '+tag)
for tag in tags: # New tags created print(tags)
try:
tags.remove(tag[9:])
except ValueError:
pass
except IOError:
sys.exit("[ERROR] (TAGS) An error occurred when parsing tags "
" of article "+filename[4:]+".")
# New tags created
for tag in [x for x in tags if "gen/tags/"+x+".tmp"
not in list_directory("gen/tags")]:
try: try:
auto_dir("gen/tags/"+tag+".tmp") auto_dir("gen/tags/"+tag+".tmp")
with open("gen/tags/"+tag+".tmp", "a+") as tag_file: with open("gen/tags/"+tag+".tmp", "a+") as tag_file:
# Delete tag file here if empty after deletion
tag_file.write(filename[4:]+"\n") tag_file.write(filename[4:]+"\n")
print("[INFO] (TAGS) Found new tag "+tag+" for " print("[INFO] (TAGS) Found new tag "+tag+" for "
"modified article "+filename[4:]+".") "modified article "+filename[4:]+".")
@@ -372,11 +619,7 @@ for filename in modified_files:
# Delete tags for deleted files and delete all generated files # Delete tags for deleted files and delete all generated files
for filename in deleted_files: for filename in deleted_files:
try: tags = os.listdir("gen/tags/")
with open(filename, 'r') as fh:
tags = get_tags(fh)
except IOError:
sys.exit("[ERROR] Unable to open file "+filename+".")
if not tags: if not tags:
sys.exit("[ERROR] In deleted article "+filename[4:]+" : " sys.exit("[ERROR] In deleted article "+filename[4:]+" : "
@@ -384,7 +627,7 @@ for filename in deleted_files:
for tag in tags: for tag in tags:
try: try:
with open("gen/tags/"+tag+".tmp", 'r+') as tag_file: with open("gen/tags/"+tag, 'r+') as tag_file:
tag_old = tag_file.read() tag_old = tag_file.read()
tag_file.truncate() tag_file.truncate()
# Delete file in tag # Delete file in tag
@@ -408,15 +651,18 @@ for filename in deleted_files:
print("[INFO] (TAGS) "+tag+" was found to be empty " print("[INFO] (TAGS) "+tag+" was found to be empty "
"but there was an error during deletion. " "but there was an error during deletion. "
"You should check manually.") "You should check manually.")
os.system('git rm '+filename)
# Delete generated files # Delete generated files
try: try:
os.unlink("gen/"+filename[4:-5]+".gen") os.unlink("gen/"+filename[4:filename.rfind('.')]+".gen")
os.unlink("blog/"+filename[4:]) os.unlink("blog/"+filename[4:])
except FileNotFoundError: except FileNotFoundError:
print("[INFO] (DELETION) Article "+filename[4:]+" seems " print("[INFO] (DELETION) Article "+filename[4:]+" seems "
"to not have already been generated. " "to not have already been generated. "
"You should check manually.") "You should check manually.")
os.system("git rm gen/"+filename[4:filename.rfind('.')]+".gen")
os.system("git rm blog/"+filename[4:])
print("[INFO] (DELETION) Deleted article "+filename[4:] + print("[INFO] (DELETION) Deleted article "+filename[4:] +
" in both gen and blog directories") " in both gen and blog directories")
@@ -426,11 +672,11 @@ for filename in deleted_files:
last_articles = latest_articles("raw/", int(params["NB_ARTICLES_INDEX"])) last_articles = latest_articles("raw/", int(params["NB_ARTICLES_INDEX"]))
tags_full_list = list_directory("gen/tags") tags_full_list = list_directory("gen/tags")
# Generate html for each article # Generate html for each article (gen/ dir)
for filename in added_files+modified_files: for filename in added_files+modified_files:
try: try:
with open(filename, 'r') as fh: with open(filename, 'r') as fh:
article = "", "", "", "", "" article, title, date, author, tags = "", "", "", "", ""
for line in fh.readlines(): for line in fh.readlines():
article += line article += line
if "@title=" in line: if "@title=" in line:
@@ -455,32 +701,65 @@ for filename in added_files+modified_files:
date_readable = ("Le "+date[0:2]+"/"+date[2:4]+"/"+date[4:8] + date_readable = ("Le "+date[0:2]+"/"+date[2:4]+"/"+date[4:8] +
" à "+date[9:11]+":"+date[11:13]) " à "+date[9:11]+":"+date[11:13])
day_aside = date[0:2]
month_aside = months[int(date[2:4]) - 1]
tags_comma = ""
tags = [i.strip() for i in tags.split(",")]
for tag in tags:
if tags_comma != "":
tags_comma += ", "
tags_comma += ("<a href=\""+params["BLOG_URL"] +
"/tags/"+tag+".html\">"+tag+"</a>")
# Markdown support
if filename.endswith(".md"):
article = markdown.markdown(gfm(article))
# Write generated HTML for this article in gen / # Write generated HTML for this article in gen /
article = replace_tags(article, search_list, replace_list) article = replace_tags(article, search_list, replace_list)
# Handle @article_path
article_path = params["BLOG_URL"] + "/" + date[4:8] + "/" + date[2:4]
article = article.replace("@article_path", article_path)
try: try:
auto_dir("gen/"+filename[4:-5]+".gen") auto_dir("gen/"+filename[4:filename.rfind('.')]+".gen")
with open("gen/"+filename[4:-5]+".gen", 'w') as article_file: with open("gen/"+filename[4:filename.rfind('.')]+".gen", 'w') as article_file:
article_file.write("<article>\n" article_file.write("<article>\n"
"\t<nav class=\"aside_article\"></nav>\n" "\t<aside>\n"
"\t\t<p class=\"day\">"+day_aside+"</p>\n"
"\t\t<p class=\"month\">"+month_aside+"</p>\n"
"\t</aside>\n"
"\t<div class=\"article\">\n" "\t<div class=\"article\">\n"
"\t\t<h1>"+title+"</h1>\n" "\t\t<header><h1 class=\"article_title\"><a " +
"href=\""+params["BLOG_URL"]+"/"+filename[4:filename.rfind('.')]+'.html' +
"\">"+title+"</a></h1></header>\n"
"\t\t"+article+"\n" "\t\t"+article+"\n"
"\t\t<p class=\"date\">"+date+"</p>\n" "\t\t<footer><p class=\"date\">"+date_readable +
"\t</div>\n") "</p>\n"
"\t\t<p class=\"tags\">Tags : "+tags_comma +
"</p></footer>\n"
"\t</div>\n"
"</article>\n")
print("[INFO] (GEN ARTICLES) Article "+filename[4:]+" generated") print("[INFO] (GEN ARTICLES) Article "+filename[4:]+" generated")
except IOError: except IOError:
sys.exit("[ERROR] An error occurred when writing generated HTML for " sys.exit("[ERROR] An error occurred when writing generated HTML for "
"article "+filename[4:]+".") "article "+filename[4:]+".")
# Starting to generate header file (except title) # Starting to generate header file (except title)
tags_header = "<ul>" tags_header = ""
for tag in tags_full_list: for tag in sorted(tags_full_list, key=cmp_to_key(locale.strcoll)):
tags_header += "<li>" with open("gen/tags/"+tag[9:-4]+".tmp", "r") as tag_fh:
tags_header += ("<a href=\""+params["BLOG_URL"]+tag[4:-4]+".html\">" + nb = len(tag_fh.readlines())
tag[9:-4]+"</a>")
tags_header += "</li>" tags_header += "<div class=\"tag\">"
tags_header += "</ul>" tags_header += ("<a href=\""+params["BLOG_URL"] +
"/tags/"+tag[9:-4]+".html\">")
tags_header += ("/"+tag[9:-4]+" ("+str(nb)+")")
tags_header += ("</a> ")
tags_header += "</div>"
try: try:
with open("raw/header.html", "r") as header_fh: with open("raw/header.html", "r") as header_fh:
header = header_fh.read() header = header_fh.read()
@@ -488,29 +767,36 @@ except IOError:
sys.exit("[ERROR] Unable to open raw/header.html file.") sys.exit("[ERROR] Unable to open raw/header.html file.")
header = header.replace("@tags", tags_header, 1) header = header.replace("@tags", tags_header, 1)
header = header.replace("@blog_url", params["BLOG_URL"], 1) header = header.replace("@blog_url", params["BLOG_URL"])
articles_header = "<ul>" articles_header = ""
articles_index = "<ul>" articles_index = ""
rss = ("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" rss = ("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n")
"<rss version=\"2.0\" xmlns:atom=\"http://www.w3.org/2005/Atom\" "
if os.path.isfile("raw/rss.css"):
rss += ("<?xml-stylesheet type=\"text/css\" " +
"href=\""+params["PROTOCOL"]+params["BLOG_URL"]+"/rss.css\"?>\n")
rss += ("<rss version=\"2.0\" xmlns:atom=\"http://www.w3.org/2005/Atom\" "
"xmlns:content=\"http://purl.org/rss/1.0/modules/content/\">\n") "xmlns:content=\"http://purl.org/rss/1.0/modules/content/\">\n")
rss += ("\t<channel>" rss += ("\t<channel>"
"\t\t<atom:link href=\""+params["BLOG_URL"]+"rss.xml\" " "\t\t<atom:link href=\""+params["PROTOCOL"]+params["BLOG_URL"] +
"rel=\"self\" type=\"application/rss+xml\"/>\n" "/rss.xml\" rel=\"self\" type=\"application/rss+xml\"/>\n"
"\t\t<title>"+params["BLOG_TITLE"]+"</title>\n" "\t\t<title>"+params["BLOG_TITLE"]+"</title>\n"
"\t\t<link>"+params["BLOG_URL"]+"</link>\n" "\t\t<link>"+params["PROTOCOL"] + params["BLOG_URL"]+"</link>\n"
"\t\t<description>"+params["DESCRIPTION"]+"</description>\n" "\t\t<description>"+params["DESCRIPTION"]+"</description>\n"
"\t\t<language>"+params["LANGUAGE"]+"</language>\n" "\t\t<language>"+params["LANGUAGE"]+"</language>\n"
"\t\t<copyright>"+params["COPYRIGHT"]+"</copyright>\n" "\t\t<copyright>"+params["COPYRIGHT"]+"</copyright>\n"
"\t\t<webMaster>"+params["WEBMASTER"]+"</webMaster>\n" "\t\t<webMaster>"+params["WEBMASTER"]+"</webMaster>\n"
"\t\t<lastBuildDate>" + "\t\t<lastBuildDate>" +
strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())+"</lastBuildDate>\n") utils.formatdate(mktime(gmtime()))+"</lastBuildDate>\n")
# Generate header (except title) + index file + rss file # Generate header (except title) + index file + rss file
for i, article in enumerate(last_articles): for i, article in enumerate(["gen/"+x[4:x.rfind('.')]+".gen" for x in last_articles]):
content, title, tags, date, author = "", "", "", "", "" content, title, tags, date, author = "", "", "", "", ""
content_desc = ""
try: try:
with open(article, "r") as fh: with open(article, "r") as fh:
for line in fh.readlines(): for line in fh.readlines():
@@ -527,6 +813,7 @@ for i, article in enumerate(last_articles):
if "@tags=" in line: if "@tags=" in line:
tags = line[line.find("@tags=")+6:].strip() tags = line[line.find("@tags=")+6:].strip()
continue continue
content_desc += line
except IOError: except IOError:
sys.exit("[ERROR] Unable to open "+article+" file.") sys.exit("[ERROR] Unable to open "+article+" file.")
@@ -535,36 +822,44 @@ for i, article in enumerate(last_articles):
if i < 5: if i < 5:
articles_header += "<li>" articles_header += "<li>"
articles_header += ("<a href=\""+params["BLOG_URL"] + articles_header += ("<a href=\""+params["BLOG_URL"] + "/" +
article[4:-4]+".html\">"+title+"</a>") article[4:-4]+".html\">"+title+"</a>")
articles_header += "</li>" articles_header += "</li>"
articles_index += "<li>" articles_index += content
articles_index += ("<a href=\""+params["BLOG_URL"] + date_rss = utils.formatdate(mktime(gmtime(mktime(datetime.
article[4:-4]+".html\">"+title+"</a>") datetime.
articles_index += "</li>" strptime(date,
date_rss = strftime("%a, %d %b %Y %H:%M:%S +0000",
gmtime(mktime(datetime.datetime.strptime(date,
"%d%m%Y-%H%M") "%d%m%Y-%H%M")
.timetuple()))) .timetuple()))))
rss += ("\t\t<item>\n" rss += ("\t\t<item>\n" +
"\t\t\t<title>"+title+"</title>\n" "\t\t\t<title>"+remove_tags(title)+"</title>\n" +
"\t\t\t<link>"+params["BLOG_URL"]+article[5:]+"</link>\n" "\t\t\t<link>"+params["PROTOCOL"]+params["BLOG_URL"]+"/" +
"\t\t\t<guid isPermaLink=\"false\">" + article[4:-4]+".html</link>\n" +
params["BLOG_URL"]+article[5:]+"</guid>\n" "\t\t\t<guid isPermaLink=\"true\">" +
"\t\t\t<description><![CDATA[" + params["PROTOCOL"] + params["BLOG_URL"]+"/"+article[4:-4]+".html</guid>\n"
replace_tags(article, search_list, replace_list) + # Apply remove_tags twice to also remove tags in @title and so
"]]></description>\n" "\t\t\t<description>" + truncate(remove_tags(remove_tags(replace_tags(get_text_rss(content_desc),
"\t\t\t<pubDate>"+date_rss+"</pubDate>\n" search_list,
"\t\t\t<category>"+', '.join(tags)+"</category>\n" replace_list)))) +
"\t\t\t<author>"+params["WEBMASTER"]+"</author>\n" "</description>\n" +
"\t\t\t<content:encoded><![CDATA[" +
replace_tags(get_text_rss(content),
search_list,
replace_list).replace('"'+params['BLOG_URL'],
'"'+params['BLOG_URL_RSS']) +
"]]></content:encoded>\n" +
"\t\t\t<pubDate>"+date_rss+"</pubDate>\n" +
("\n".join(["\t\t\t<category>" + i.strip() + "</category>"
for i in tags.split(",")]))+"\n" +
"\t\t\t<author>"+params["WEBMASTER"]+"</author>\n" +
"\t\t</item>\n") "\t\t</item>\n")
# Finishing header gen # Finishing header gen
articles_header += "</ul>" articles_header += ("<li><a "+"href=\""+params["BLOG_URL"] +
"/archives.html\">"+"Archives</a></li>")
header = header.replace("@articles", articles_header, 1) header = header.replace("@articles", articles_header, 1)
try: try:
@@ -581,14 +876,15 @@ except IOError:
try: try:
with open("raw/footer.html", "r") as footer_fh: with open("raw/footer.html", "r") as footer_fh:
footer = footer_fh.read() footer = footer_fh.read()
footer = footer.replace("@blog_url", params["BLOG_URL"])
except IOError: except IOError:
sys.exit("[ERROR] An error occurred while parsing footer " sys.exit("[ERROR] An error occurred while parsing footer "
"file raw/footer.html.") "file raw/footer.html.")
# Finishing index gen # Finishing index gen
articles_index += "</ul>"
index = (header.replace("@title", params["BLOG_TITLE"], 1) + index = (header.replace("@title", params["BLOG_TITLE"], 1) +
articles_index + footer) articles_index + "<p class=\"archives\"><a "+"href=\"" +
params["BLOG_URL"]+"/archives.html\">Archives</a></p>"+footer)
try: try:
with open("blog/index.html", "w") as index_fh: with open("blog/index.html", "w") as index_fh:
@@ -613,10 +909,16 @@ for tag in tags_full_list:
tag_content = header.replace("@title", params["BLOG_TITLE"] + tag_content = header.replace("@title", params["BLOG_TITLE"] +
" - "+tag[4:-4], 1) " - "+tag[4:-4], 1)
# Sort by date
with open(tag, "r") as tag_gen_fh: with open(tag, "r") as tag_gen_fh:
for line in tag_gen_fh.readlines(): articles_list = ["gen/"+line.replace(".html", ".gen").replace('.md', '.gen').strip() for line
line = line.replace(".html", ".gen") in tag_gen_fh.readlines()]
with open("gen/"+line.strip(), "r") as article_fh: articles_list.sort(key=lambda x: (get_date(x)[4:8], get_date(x)[2:4],
get_date(x)[:2], get_date(x)[9:]),
reverse=True)
for article in articles_list:
with open(article.strip(), "r") as article_fh:
tag_content += article_fh.read() tag_content += article_fh.read()
tag_content += footer tag_content += footer
@@ -630,37 +932,32 @@ for tag in tags_full_list:
sys.exit("[ERROR] An error occurred while generating tag page \"" + sys.exit("[ERROR] An error occurred while generating tag page \"" +
tag[9:-4]+"\"") tag[9:-4]+"\"")
# Finish articles pages generation # Finish generating HTML for articles (blog/ dir)
for filename in added_files+modified_files: for article in added_files+modified_files:
try: try:
auto_dir("blog/"+filename[4:]) with open("gen/"+article[4:article.rfind('.')]+".gen", "r") as article_fh:
with open("blog/"+filename[4:], "w") as article_fh: content = article_fh.read()
with open("gen/header.gen", "r") as header_gen_fh:
article = header_gen_fh.read()
with open("gen/"+filename[4:-5]+".gen", "r") as article_gen_fh:
line = article_gen_fh.readline()
while "@title" not in line:
line = article_gen_fh.readline()
line = line.strip()
title_pos = line.find("@title=")
title = line[title_pos+7:]
article_gen_fh.seek(0)
article = article.replace("@title", params["BLOG_TITLE"] +
" - "+title, 1)
article += replace_tags(article_gen_fh.read(),
search_list,
replace_list)
with open("gen/footer.gen", "r") as footer_gen_fh:
article += footer_gen_fh.read()
article_fh.write(article)
print("[INFO] (ARTICLES) Article page for "+filename[4:] +
" has been generated successfully.")
except IOError: except IOError:
sys.exit("[ERROR] An error occurred while generating article " + sys.exit("[ERROR] An error occurred while opening"
filename[4:]+" page.") "gen/"+article[4:article.rfind('.')]+".gen file.")
# Regenerate page for years / months for line in content.split("\n"):
if "@title=" in line:
title = line[line.find("@title=")+7:].strip()
break
content = header.replace("@title", params["BLOG_TITLE"] + " - " +
title, 1) + content + footer
try:
auto_dir("blog/"+article[4:article.rfind('.')]+'.html')
with open("blog/"+article[4:article.rfind('.')]+'.html', "w") as article_fh:
article_fh.write(content)
print("[INFO] (GEN ARTICLES) HTML file generated in blog dir for "
"article "+article[4:article.rfind('.')]+'.html'+".")
except IOError:
sys.exit("[ERROR] Unable to write blog/"+article[4:article.rfind('.')]+'.html'+" file.")
# Regenerate pages for years / months
years_list.sort(reverse=True) years_list.sort(reverse=True)
for i in years_list: for i in years_list:
try: try:
@@ -668,7 +965,7 @@ for i in years_list:
except ValueError: except ValueError:
continue continue
# Generate page per year # Generate pages per year
page_year = header.replace("@title", params["BLOG_TITLE"]+" - "+i, 1) page_year = header.replace("@title", params["BLOG_TITLE"]+" - "+i, 1)
months_list.sort(reverse=True) months_list.sort(reverse=True)
@@ -681,7 +978,10 @@ for i in years_list:
params["BLOG_TITLE"]+" - "+i+"/"+j, 1) params["BLOG_TITLE"]+" - "+i+"/"+j, 1)
articles_list = list_directory("gen/"+i+"/"+j) articles_list = list_directory("gen/"+i+"/"+j)
articles_list.sort(key=lambda x: os.stat(x).st_mtime, reverse=True) articles_list.sort(key=lambda x: (get_date(x)[4:8], get_date(x)[2:4],
get_date(x)[:2], get_date(x)[9:]),
reverse=True)
for article in articles_list: for article in articles_list:
try: try:
with open(article, "r") as article_fh: with open(article, "r") as article_fh:
@@ -708,3 +1008,72 @@ for i in years_list:
page_year_fh.write(page_year) page_year_fh.write(page_year)
except IOError: except IOError:
sys.exit("[ERROR] Unable to write index file for "+i+".") sys.exit("[ERROR] Unable to write index file for "+i+".")
# Generate archive page
archives = header.replace("@title", params["BLOG_TITLE"]+" - Archives", 1)
years_list = os.listdir("blog/")
years_list.sort(reverse=True)
archives += ("<article><div class=\"article\"><h1 " +
"class=\"article_title\">Archives</h1><ul>")
for i in years_list:
if not os.path.isdir("blog/"+i):
continue
try:
int(i)
except ValueError:
continue
archives += "<li><a href=\""+params["BLOG_URL"]+"/"+i+"\">"+i+"</a></li>"
archives += "<ul>"
months_list = os.listdir("blog/"+i)
months_list.sort(reverse=True)
for j in months_list:
if not os.path.isdir("blog/"+i+"/"+j):
continue
archives += ("<li><a href=\""+params["BLOG_URL"] + "/" + i +
"/"+j+"\">"+datetime.datetime.
strptime(j, "%m").strftime("%B").title()+"</a></li>")
archives += "</ul>"
archives += "</ul></div></article>"
archives += footer
try:
with open("blog/archives.html", "w") as archives_fh:
archives_fh.write(archives)
except IOError:
sys.exit("[ERROR] Unable to write blog/archives.html file.")
# Include header and footer for pages that need it
for i in os.listdir("blog/"):
if (os.path.isdir("blog/"+i) or i in ["header.html", "footer.html",
"rss.xml", "style.css", "index.html",
"archives.html", "humans.txt"]):
continue
if not i.endswith(".html"):
continue
with open("blog/"+i, 'r+') as fh:
content = fh.read()
fh.seek(0)
if content.find("#include_header_here") != -1:
content = content.replace("#include_header_here",
header.replace("@title",
(params["BLOG_TITLE"] +
" - "+i[:i.rfind('.')].title()),
1),
1)
fh.write(content)
fh.seek(0)
if content.find("#include_footer_here") != -1:
fh.write(content.replace("#include_footer_here", footer, 1))
os.system("git add --ignore-removal blog/ gen/")
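The RSS changes in the diff above (commit "Correct RFC822 for dates") replace a hand-built `strftime("%a, %d %b %Y %H:%M:%S +0000", ...)` string with `email.utils.formatdate`, which always emits a valid RFC 822 date as required by the RSS 2.0 `<pubDate>` and `<lastBuildDate>` elements. A minimal sketch of the conversion, assuming the `DDMMYYYY-HHMM` stamp format that the diff's `strptime` call implies (`rfc822_date` is a hypothetical helper, not a function of the script):

```python
from email import utils
from time import mktime
import datetime

def rfc822_date(stamp):
    # Parse the blog's internal "DDMMYYYY-HHMM" stamp (an assumption
    # drawn from the strptime format in the diff) ...
    parsed = datetime.datetime.strptime(stamp, "%d%m%Y-%H%M")
    # ... and re-emit it as an RFC 822 date string.
    return utils.formatdate(mktime(parsed.timetuple()))

print(rfc822_date("11072014-1335"))
```

The point of the change is that `formatdate` handles day and month names and the timezone suffix itself, so the feed no longer depends on locale-sensitive `strftime` output.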


@@ -4,7 +4,7 @@
@title=Un exemple d'article @title=Un exemple d'article
@tags=test @tags=test
--> -->
<p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus</p> <p>1Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus</p>
<p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus</p> <p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus</p>

raw/contact.html Normal file

@@ -0,0 +1,29 @@
#include_header_here
<article>
<aside class="aside_article">
<p class="month">Phyks</p>
</aside>
<div class="article">
<h1 class="article_title">Contact</h1>
<h2>E-mail</h2>
<p>[FR] Vous pouvez me contacter par e-mail à l'adresse suivante (pseudo@domaine.me) :</p>
<p>[EN] You can contact me using the following e-mail address (nick@domain.me) :</p>
<p class="center"><span class="contact_e-mail">@</span></p>
<h2>Jabber</h2>
<p>[FR] Vous pouvez également me joindre sur Jabber :</p>
<p>[EN] I'm also available very often on Jabber :</p>
<p class="center"><span class="contact_e-mail">@</span></p>
<h2>Divers</h2>
<ul>
<li>Mon <a href="https://github.com/phyks/">profil Github</a>.</li>
<li>[FR] Tous les codes que j'écris et les articles de ce blog sont sous licence <em>BEERWARE</em> (sauf mention contraire). Vous êtes libres de faire tout ce que vous voulez avec. Si vous souhaitez me soutenir, le meilleur moyen reste de partager ces informations autour de vous (et de citer la source :). Vous pouvez également me payer <del>une bière</del> un soda <em>via</em> Flattr ou tout autre moyen qui vous convient.</li>
<li>[EN] All my source codes and articles on my blog are under a <em>BEERWARE</em> license (except if anything special is specified). You are free to do whatever you want with them. If you want to support me, the best way is to share these pieces of information around you (and to cite the source :). You can also pay me a <del>beer</del> soda <em>via</em> Flattr or any mean you want.</li>
</ul>
</div>
</article>
#include_footer_here

raw/design.css Normal file

@@ -0,0 +1,291 @@
html, body {
margin: 0;
padding: 0;
background-color: rgb(35, 34, 34);
background-image: url('img/bg.png');
font-family: "DejaVu Sans", Verdana, "Bitstream Vera Sans", Geneva, sans-serif;
line-height: 1.5em;
text-align: justify;
}
/* General classes */
.monospace {
font-family: "Lucida Console", Monaco, monospace;
}
.center {
text-align: center;
}
.contact_e-mail:before {
unicode-bidi: bidi-override;
direction: rtl;
content: "em.skyhp";
}
.contact_e-mail:after {
unicode-bidi: bidi-override;
direction: rtl;
content: "skyhp";
}
/* Wrapper */
#wrapper {
padding-left: 17em;
transition: all 0.4s ease 0s;
}
/* Hide the header and display it only in responsive view */
#header {
display: none;
text-align: center;
width: 50%;
margin: auto;
font-size: 0.9em;
padding: 0.3em;
}
#header h1 {
font-weight: normal;
padding: 0;
margin: 0;
margin-top: 0.5em;
background-color: rgb(117, 170, 39);
background-image: url("img/sidebar.png");
border: 1px solid black;
border-radius: 0.2em;
padding: 0.6em;
}
#header a {
color: white;
text-decoration: none;
}
/* Sidebar */
#sidebar-wrapper {
margin-left: -16em;
position: fixed;
left: 16em;
width: 16em;
height: 100%;
background: url('img/sidebar.png') repeat scroll 0% 0% rgb(17, 78, 121);
overflow-y: auto;
transition: all 0.4s ease 0s;
color: white;
padding-left: 0.5em;
padding-right: 0.5em;
font-size: 0.9em;
z-index: 1000;
}
#sidebar-wrapper a {
color: white;
}
#sidebar-wrapper h2 {
font-weight: normal;
text-align: center;
margin: 0.5em;
}
#sidebar-title {
font-size: 2em;
margin-top: 0.5em;
padding: 0.7em 0.5em;
background-color: rgb(117, 170, 39);
background-image: url("img/sidebar.png");
border-radius: 0.2em;
font-weight: normal;
text-align: center;
border: 1px solid black;
}
#sidebar-title a {
text-decoration: none;
}
#sidebar-tags {
text-align: center;
}
#sidebar-tags .tag {
display: inline;
}
#sidebar-tags .tag img {
width: 20%;
max-width: 4em;
margin: 0.5em 0.5em 1.5em;
}
#sidebar-tags .tag .popup {
position: absolute;
margin-left: -35%;
word-wrap: break-word;
width: 33%;
margin-top: 1em;
color: rgb(117, 170, 39);
background: none repeat scroll 0% 0% rgba(0, 0, 0, 0.9);
padding: 1em;
border-radius: 3px;
box-shadow: 0px 0px 2px rgba(0, 0, 0, 0.5);
opacity: 0;
text-align: center;
transform: scale(0) rotate(-12deg);
transition: all 0.25s ease 0s;
}
#sidebar-tags .tag:hover .popup, #sidebar-tags .tag:focus .popup
{
transform: scale(1) rotate(0);
opacity: 0.8;
}
#sidebar-articles {
opacity: 0.7;
text-align: center;
list-style-type: none;
padding: 0;
}
#sidebar-links {
list-style-type: none;
text-align: center;
padding: 0;
}
#sidebar-links li {
background-color: rgb(117, 170, 39);
background-image: url("img/sidebar.png");
text-align: right;
margin-right: 2em;
padding-right: 1em;
margin-bottom: 1em;
margin-left: -0.5em;
height: 2em;
border-top-right-radius: 0.7em;
border-bottom-right-radius: 0.7em;
border: 1px solid black;
transition: all 0.4s ease 0s;
}
#sidebar-links li:hover {
transform: scale(1.1);
}
/* Articles */
article {
max-width: 70em;
margin: auto;
}
.article {
background-color: white;
margin-left: 4.5em;
padding: 1.3em;
position: relative;
margin-bottom: 3em;
min-height: 5.48em;
}
#articles article:last-child {
margin-bottom: 0;
}
#articles h1, #articles h2, #articles h3, #articles h4, #articles h5 {
font-family: "Lucida Console", Monaco, monospace;
font-weight: normal;
}
article .article_title {
text-align: center;
margin-top: 0.1em;
margin-bottom: 1.5em;
}
#articles {
width: calc(100% - 1.5em);
padding-top: 1.5em;
}
#articles h1 {
margin: 0;
}
.aside_article {
position: absolute;
background-color: white;
font-size: 1.5em;
height: 4.5em;
padding: 0 0.5em;
-webkit-transform-origin: 100% 0;
-webkit-transform: translateX(-100%) translateY(1.2em) rotate(-90deg);
transform-origin: 100% 0;
transform: translateX(-100%) translateY(1.2em) rotate(-90deg);
}
.aside_article p {
display: block;
}
.aside_article .day {
float: right;
margin-bottom: 0.3em;
margin-top: 0.4em;
-webkit-transform: rotate(90deg);
transform: rotate(90deg);
width: 100%;
text-align: center;
}
#articles .date {
font-size: 0.8em;
font-style: italic;
text-align: right;
margin: 0;
}
.archives {
text-align: center;
color: white;
}
.archives a {
color: white;
}
/* Media queries */
@media (max-width: 767px) {
#wrapper {
padding-left: 1.5em;
}
#sidebar-wrapper {
left: 0;
}
#sidebar-wrapper:hover {
left: 16em;
width: 16em;
transition: all 0.4s ease 0s;
}
#sidebar-title {
display: none;
}
}
@media (max-width: 600px) {
.aside_article {
display: none;
}
.article {
margin-left: auto;
}
#header {
display: block;
}
}

raw/divers.html Normal file

@@ -0,0 +1,18 @@
#include_header_here
<article>
<aside class="aside_article">
<p class="month">Divers</p>
</aside>
<div class="article">
<h1 class="article_title">Liens divers</h1>
<ul>
<li><a href="#base_url/pub/">Divers documents en vrac</a></li>
<li><a href="#base_url/pub/respawn">Mon respawn</a></li>
<li><a href="http://git.phyks.me">Mon dépôt Git, alternatif à Github</a></li>
<li><a href="#base_url/autohebergement.html">Ma doc sur l'autohébergement</a></li>
<li><a href="http://snippet.phyks.me">Mes snippets</a></li>
<li><a href="http://velib.phyks.me">Ma webapp vélib</a> (cf <a href="https://github.com/phyks/BikeInParis">le projet sur Github</a>)</li>
</ul>
</div>
</article>
#include_footer_here


@@ -1,30 +1,41 @@
<!doctype html> <!DOCTYPE html>
<html lang="fr"> <html lang="fr">
<head> <head>
<meta charset="utf-8"> <meta charset="utf-8">
<title>@titre</title> <title>@title</title>
<link rel="stylesheet" href="style.css"> <link rel="stylesheet" href="design.css"/>
<link type="text/plain" rel="author" href="humans.txt"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head> </head>
<body> <body>
<div id="wrapper"> <div id="wrapper">
<div id="left"> <!-- Sidebar -->
<h1 id="head_title">Phyks' Blog</h1> <div id="sidebar-wrapper">
<hr/><hr/> <h1 id="sidebar-title"><a href="@blog_url">~Phyks</a></h1>
<h2>Catégories</h2> <h2>Catégories</h2>
<div id="categories"> <nav id="sidebar-tags">
@categories @tags
</div> </nav>
<hr/>
<h2>Derniers articles</h2> <h2>Derniers articles</h2>
<div id="last_articles"> <ul id="sidebar-articles">
@articles @articles
</div> </ul>
<hr/>
<h2>Liens</h2> <h2>Liens</h2>
<ul class="links"> <ul id="sidebar-links">
<li><a href="contact.html">Me contacter</a></li> <li><a href="contact.html" title="Contact">Me contacter</a></li>
<li><a href="http://links.phyks.me">Mon shaarli</a></li> <li class="monospace"><a href="//links.phyks.me" title="Mon Shaarli">find ~phyks -type l</a></li>
<li><a href="http://projet.phyks.me">Mes projets</a></li> <li><a href="https://github.com/phyks/" title="Github">Mon Github</a></li>
<li><a href="divers.html" title="Divers">Divers</a></li>
</ul> </ul>
</div> </div>
<!-- Page content -->
<div id="header">
<h1><a href="@blog_url">~Phyks</a></h1>
</div>
<div id="articles"> <div id="articles">

raw/humans.txt Normal file

@@ -0,0 +1,11 @@
/* AUTHOR */
Phyks (Lucas Verney)
Website : http://phyks.me
Send me an e-mail : phyks@phyks.me
Or contact me on jabber : phyks@phyks.me
Or meet me on github : https://github.com/phyks/
/* SITE */
Last update: 2013/09/15
Standards: HTML5, CSS3 (valid)
Software: Only open-source software :)

raw/img/bg.png Normal file (binary, 162 KiB; not shown)

raw/img/sidebar.png Normal file (binary, 64 KiB; not shown)

View File

@@ -1,9 +1,12 @@
BLOG_TITLE = Phyks' blog BLOG_TITLE = Blog
NB_ARTICLES_INDEX = 20 NB_ARTICLES_INDEX = 20
BLOG_URL = file:///home/lucas/Blog/git/blog/ BLOG_URL = #BLOG_URL
PROTOCOL = http
IGNORE_FILES =
#RSS params #RSS params
WEBMASTER = webmaster@phyks.me (Phyks) WEBMASTER = #EMAIL_URL
LANGUAGE = fr LANGUAGE = fr
DESCRIPTION = DESCRIPTION =
COPYRIGHT = COPYRIGHT =
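The `raw/params` file above uses one `KEY = value` assignment per line, with lines such as `#RSS params` acting as comments. A sketch of how such a file could be read into a dict (`parse_params` is a hypothetical helper for illustration, not the script's actual loader):

```python
def parse_params(text):
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and "#RSS params"-style comments
        # Split on the first "=" only, so values may themselves contain "="
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

sample = "BLOG_TITLE = Blog\nNB_ARTICLES_INDEX = 20\n#RSS params\nLANGUAGE = fr"
print(parse_params(sample))
```

Note that values are kept as strings; the main script casts where needed, e.g. `int(params["NB_ARTICLES_INDEX"])`.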

raw/tags/test.png Normal file (binary, 16 KiB; not shown)