Compare commits


No commits in common. "master" and "refactor" have entirely different histories.

16 changed files with 259 additions and 1129 deletions

.gitignore (1 change)

@@ -1,3 +1,2 @@
*~
*.swp
blog/

LICENSE (new file, 11 lines)

@@ -0,0 +1,11 @@
/*
* --------------------------------------------------------------------------------
* "THE NO-ALCOHOL BEER-WARE LICENSE" (Revision 42):
* Phyks (webmaster@phyks.me) wrote this file. As long as you retain this notice you
* can do whatever you want with this stuff (and you can also do whatever you want
* with this stuff without retaining it, but that's not cool...). If we meet some
* day, and you think this stuff is worth it, you can buy me a --beer-- soda in
* return.
* Phyks
* ---------------------------------------------------------------------------------
*/

README.md (133 changes)

@@ -1,114 +1,53 @@
Blogit
======
This script aims at building a static blog on top of a git repo. This way, you can
use git's abilities to keep full archives and backups of your blog. Many scripts
aim at doing this, and this is just one more. I needed something more personal
and fitted to my needs, so I came up with this code. It's not very beautiful and
could largely be optimized, but it works for me. It may not fit your needs,
but that's for you to judge.
A git-based blogging software. Just like Jekyll and friends, it takes your articles as HTML files and compiles them to generate static pages and an RSS feed to serve behind a webserver. It uses git as a backend file manager (since git provides useful features such as history and hooks) and Python to script the conversion process. You can customize the Python scripts to handle special (i.e., non-standard HTML) tags such as <code>, for example. See the params file in the raw dir to modify this.
This script is just a Python script that should be run as a git hook. You can
see a working version at http://phyks.me and the repository behind it is
publicly viewable at http://git.phyks.me/Blog. You should browse this
repository for example configuration and usage if you are interested in this
script.
This project is still a WIP.
This script has been developed by Phyks (phyks@phyks.me). For any suggestion
or remark, please send me an e-mail.
How does it work?
=================
## Installation
There are three directories in the tree: `raw` for your raw HTML articles and header/footer, `gen` (created by the script) for the temporary generated files, and `blog` for the blog folder to serve behind the webserver.
1. Clone this repo.
2. Clear the .git directory and initialize a new empty git repo to store your
blog.
3. Move the `pre-commit.py` script to `.git/hooks/pre-commit`.
4. Edit `raw/params` to fit your needs.
5. The `raw` folder comes with an example blog architecture. Delete it and
make your own.
6. You are ready to go!
Articles must be in folders year/month/ARTICLE.html (ARTICLE is whatever you want) and some extra comments must be put in the article file for the script to handle it correctly. See the test.html example file for more info on how to do it.
## Params
You can put a file in "wait mode" and not publish it yet, simply by adding .ignore at the end of its filename. Every file that you put in raw that is not a .html file is just copied to the same place in the blog dir (to put images in your articles, for example, just put them beside your articles and make a relative link in your HTML article).
Available options in `raw/params` are:
You should change the params file (raw/params) before starting, to correctly set your blog URL, your email address and the blog title (among other parameters).
* `BLOG_TITLE`: The title of the blog, to display in the <title> element
in rendered pages.
* `NB_ARTICLES_INDEX`: Number of articles to display on the index page.
* `BLOG_URL`: Your blog base URL.
* `IGNORE_FILES`: A comma-separated list of files to ignore.
* `WEBMASTER`: Webmaster e-mail to put in the RSS feed.
* `LANGUAGE`: Language param to put in the RSS feed.
* `DESCRIPTION`: Blog description, for the RSS feed.
* `COPYRIGHT`: Copyright information, for the RSS feed.
* `SEARCH`: Comma-separated list of elements to search for and replace
(custom regexes, see code for more info).
* `REPLACE`: Corresponding replacement elements.
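The `SEARCH`/`REPLACE` pair above amounts to an ordered, pairwise regex substitution pass over each article. A minimal standalone sketch of the idea (the smiley example is made up, not taken from the hook's actual configuration):

```python
import re

def replace_tags(article, search_list, replace_list):
    # Apply each (search regex, replacement) pair in order.
    for search, replace in zip(search_list, replace_list):
        article = re.sub(search, replace, article)
    return article

# Example: turn a custom smiley shortcut into an <img> tag.
html = replace_tags("Hello :)",
                    [r":\)"],
                    ['<img alt="smile" src="smile.png"/>'])
```

Note that `zip` silently drops unmatched entries, so `SEARCH` and `REPLACE` should have the same number of comma-separated elements.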
When you finish editing an article, just git add it and commit. The pre-commit.py hook will run automatically and generate your working copy.
## Usage
Note about tags: Tags are automatically handled, and a page for each tag is automatically generated. A page with all the articles for each month and each year is also automatically generated.
This script will use three folders:
Note: Don't remove gen/ content unless you know what you're doing. These files are temporary files for the blog generation, but they are needed to regenerate the RSS feed, for example. If you delete them, you may need to regenerate them.
* `raw/` will contain your raw files
* `blog/` will contain the fully generated files to serve _via_ your http
server
* `gen/` will store temporary intermediate files
Important note: This is currently a beta version and the hook isn't set to run automatically for now. You have to manually run pre-commit.py (or move it to .git/hooks, but this has never been tested ^^).
All the files you have to edit are located in the `raw/` folder. It contains by
default an example version of a blog. You should start by renaming it and
making your own.
Example of syntax for an article
================================
```HTML
<!--
@tags=*** //put here the tags for your article, comma-separated list
@title=*** //Title for your article
@author=Phyks //Name of the author (not displayed for now)
@date=23062013-1337 //Date in the format DDMMYYYY-HHMM
-->
<article content> (== Whatever you want)</article>
```
Articles can be edited in HTML directly or in Markdown. They must be located in
a `raw/year/month` folder, according to their date of publication and end with
.html for HTML articles and .md for Markdown articles. The basic content of an
article is:

```HTML
<!-- @author=AUTHOR_NAME @date=DDMMYYYY-HHMM @title=TITLE @tags=TAGS -->
CONTENT
```
where TAGS is a comma-separated list. Tags are automatically created if they do
not exist yet. The HTML comment *must* be at the beginning of the document and
is parsed to set the metadata of the article.
CONTENT is then either an HTML string or a Markdown-formatted one.
You can keep an article from being publicly visible while you are still
writing it, simply by adding a .ignore extension.
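The metadata comment described above is straightforward to parse mechanically. A hypothetical sketch (`parse_meta` is not part of the hook, which does its own line-by-line scan, but it illustrates the format):

```python
import re

ARTICLE = """<!--
@title=Hello world
@author=Phyks
@date=23062013-1337
@tags=python, blog
-->
<p>Article body.</p>"""

def parse_meta(article):
    # Collect @key=value pairs from the leading HTML comment.
    comment = article[article.find("<!--"):article.find("-->")]
    return {key: value.strip()
            for key, value in re.findall(r"@(\w+)=(.+)", comment)}

meta = parse_meta(ARTICLE)
# meta["date"] is in the DDMMYYYY-HHMM format, e.g. "23062013-1337"
```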
LICENSE
=======
TL;DR: I don't give a damn about anything you may do using this code. It would just be nice to
credit where the original code comes from.
When you finish editing your article, you can add the files to git and commit,
to launch the script. You can also manually call the script with the
`--force-regen` option if you want to rebuild your entire blog.
## Header file
You can use the `@blog_url` syntax anywhere. It will be replaced by the URL of
the blog, as defined in the parameters (and this is useful to include CSS
etc.).
You can also use `@tags` that will be replaced by the list of tags and
`@articles` for the list of last articles.
## Static files
In static files in the raw folder (such as `divers.html` in the demo code), you
can use `#base_url`, which will be replaced by the base URL of the blog, as
defined in the parameters. This is useful for making links.
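The substitution itself is a plain string replacement. A minimal sketch (the function name and URL value are made-up examples, not the hook's code):

```python
def render_static(raw_html, blog_url):
    # Replace every #base_url placeholder with the configured blog URL.
    return raw_html.replace("#base_url", blog_url)

page = render_static('<a href="#base_url/divers.html">misc</a>',
                     "https://example.com/blog")
```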
## Alternatives
There are many alternatives to this script, but they didn't fit my needs (and
I didn't test them all):
* fugitive : http://shebang.ws/fugitive-readme.html
* Jekyll : http://jekyllrb.com/ and Octopress : http://octopress.org/
* Blogofile : http://www.blogofile.com/
## LICENSE
--------------------------------------------------------------------------------
"THE NO-ALCOHOL BEER-WARE LICENSE" (Revision 42): Phyks
(webmaster@phyks.me) wrote this file. As long as you retain this notice you
can do whatever you want with this stuff (and you can also do whatever you
want with this stuff without retaining it, but that's not cool...). If we
meet some day, and you think this stuff is worth it, you can buy me a
<del>beer</del> soda in return. Phyks
---------------------------------------------------------------------------------
* --------------------------------------------------------------------------------
* "THE NO-ALCOHOL BEER-WARE LICENSE" (Revision 42):
* Phyks (webmaster@phyks.me) wrote this file. As long as you retain this notice you
* can do whatever you want with this stuff (and you can also do whatever you want
* with this stuff without retaining it, but that's not cool...). If we meet some
* day, and you think this stuff is worth it, you can buy me a <del>beer</del> soda
* in return.
* Phyks
* ---------------------------------------------------------------------------------


@@ -1,87 +0,0 @@
#!/usr/bin/env python3
from bs4 import BeautifulSoup
from bs4.element import Comment
import os
import base64
import hashlib
import hmac
import requests
import sys

"""
Script to import articles from a blogit blog in Known.
Must be run with a correct API_KEY (see below) and from the `gen/` folder.
"""


def list_directory(path):
    fichier = []
    for root, dirs, files in os.walk(path):
        for i in files:
            fichier.append(os.path.join(root, i))
    return fichier


def hmac_sha256(message, key):
    return base64.b64encode(hmac.new(key.encode("utf-8"),
                                     message.encode("utf-8"),
                                     digestmod=hashlib.sha256)
                            .digest()).decode("utf-8")


def known_api(username, api_key, type, payload):
    headers = {
        "X-KNOWN-USERNAME": username,
        "X-KNOWN-SIGNATURE": hmac_sha256("/"+type+"/edit", api_key)
    }
    return requests.post("https://known.phyks.me/"+type+"/edit",
                         data=payload,
                         headers=headers)


if len(sys.argv) < 3:
    print("Usage: "+sys.argv[0]+" USERNAME API_KEY [file]")
    sys.exit()

API_USERNAME = sys.argv[1]
API_KEY = sys.argv[2]

if len(sys.argv) <= 3:
    # Flatten the per-year listings into a single list of files.
    files = [f for i in ["2013", "2014", "2015"] for f in list_directory(i)]
else:
    files = [sys.argv[3]]

for file in files:
    print("Processing file "+file)
    with open(file, 'r') as fh:
        soup = BeautifulSoup(fh.read())
    content = []
    for i in soup.div.find('header').next_siblings:
        if i.name == "footer":
            break
        if type(i) != Comment:
            content.append(i)
    comment = soup.div.findAll(text=lambda text: isinstance(text, Comment))
    comment = [i.strip() for i in comment[0].strip().split('\n')]
    for j in comment:
        if j.startswith("@title"):
            title = j.split("=")[1]
        elif j.startswith("@date"):
            date = j.split("=")[1]
        elif j.startswith("@tags"):
            tags = j.split("=")[1]
            tags = ', '.join(["#"+i.strip() for i in tags.split(',')])
    meta = {
        "title": title,
        "date": (str(date[4:8])+":"+str(date[2:4])+":"+str(date[0:2]) +
                 " "+str(date[9:11])+":"+str(date[11:13])+":00"),
        "tags": tags,
    }
    content = ''.join([str(i) for i in content]).strip()
    content += "\n<p>"+meta["tags"]+"</p>"
    payload = {"body": content,
               "title": meta["title"],
               "created": meta["date"]}
    known_api(API_USERNAME, API_KEY, "entry", payload)
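The `X-KNOWN-SIGNATURE` header built by the import script above is a base64-encoded HMAC-SHA256 of the request path, keyed with the API key. The signing step can be exercised on its own (the path and key below are dummy values):

```python
import base64
import hashlib
import hmac

def hmac_sha256(message, key):
    # Same signing scheme as the import script: HMAC-SHA256, base64-encoded.
    digest = hmac.new(key.encode("utf-8"), message.encode("utf-8"),
                      digestmod=hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

signature = hmac_sha256("/entry/edit", "dummy-api-key")
# A SHA-256 digest is 32 bytes, so the base64 signature is always 44 chars.
```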


@@ -1,22 +1,9 @@
#!/usr/bin/env python3
#!/usr/bin/python
# Blogit script written by Phyks (Lucas Verney) for his personal use. I
# distribute it with absolutely no warranty, except that it works for me on my
# blog :)
# This script is a pre-commit hook that should be placed in your .git/hooks
# folder to work. Read README file for more info.
# LICENSE :
# -----------------------------------------------------------------------------
# "THE NO-ALCOHOL BEER-WARE LICENSE" (Revision 42):
# Phyks (webmaster@phyks.me) wrote this file. As long as you retain this notice
# you can do whatever you want with this stuff (and you can also do whatever
# you want with this stuff without retaining it, but that's not cool...). If
# we meet some day, and you think this stuff is worth it, you can buy me a
# <del>beer</del> soda in return.
# Phyks
# ----------------------------------------------------------------------------
# TODO : What happens when a file is moved with git ?
# TODO : Test the whole thing
# TODO : What happens when I run it as a hook ?
# TODO : What happens when I commit with -a option ?
import sys
import getopt
@@ -25,176 +12,18 @@ import os
import datetime
import subprocess
import re
import locale
import markdown
from email import utils
from hashlib import md5
from functools import cmp_to_key
from time import gmtime, strftime, mktime
from bs4 import BeautifulSoup
# ========================
# Github Flavored Markdown
# ========================
def gfm(text):
# Extract pre blocks.
extractions = {}
def pre_extraction_callback(matchobj):
digest = md5(matchobj.group(0).encode("utf-8")).hexdigest()
extractions[digest] = matchobj.group(0)
return "{gfm-extraction-%s}" % digest
pattern = re.compile(r'<pre>.*?</pre>', re.MULTILINE | re.DOTALL)
text = re.sub(pattern, pre_extraction_callback, text)
# Prevent foo_bar_baz from ending up with an italic word in the middle.
def italic_callback(matchobj):
s = matchobj.group(0)
if list(s).count('_') >= 2:
return s.replace('_', '\_')
return s
text = re.sub(r'^(?! {4}|\t)\w+_\w+_\w[\w_]*', italic_callback, text)
# In very clear cases, let newlines become <br /> tags.
def newline_callback(matchobj):
if len(matchobj.group(1)) == 1:
return matchobj.group(0).rstrip() + ' \n'
else:
return matchobj.group(0)
pattern = re.compile(r'^[\w\<][^\n]*(\n+)', re.MULTILINE)
text = re.sub(pattern, newline_callback, text)
# Insert pre block extractions.
def pre_insert_callback(matchobj):
return '\n\n' + extractions[matchobj.group(1)]
text = re.sub(r'{gfm-extraction-([0-9a-f]{32})\}', pre_insert_callback,
text)
def handle_typography(text):
""" Add non breakable spaces before double punctuation signs."""
text = text.replace(' :', '&nbsp;:')
text = text.replace(' ;', '&nbsp;;')
text = text.replace(' !', '&nbsp;!')
text = text.replace(' ?', '&nbsp;?')
text = text.replace(' /', '&nbsp;/')
return text
return handle_typography(text)
# Test suite.
try:
from nose.tools import assert_equal
except ImportError:
def assert_equal(a, b):
assert a == b, '%r != %r' % (a, b)
def test_single_underscores():
"""Don't touch single underscores inside words."""
assert_equal(
gfm('foo_bar'),
'foo_bar',
)
def test_underscores_code_blocks():
"""Don't touch underscores in code blocks."""
assert_equal(
gfm(' foo_bar_baz'),
' foo_bar_baz',
)
def test_underscores_pre_blocks():
"""Don't touch underscores in pre blocks."""
assert_equal(
gfm('<pre>\nfoo_bar_baz\n</pre>'),
'\n\n<pre>\nfoo_bar_baz\n</pre>',
)
def test_pre_block_pre_text():
"""Don't treat pre blocks with pre-text differently."""
a = '\n\n<pre>\nthis is `a\\_test` and this\\_too\n</pre>'
b = 'hmm<pre>\nthis is `a\\_test` and this\\_too\n</pre>'
assert_equal(
gfm(a)[2:],
gfm(b)[3:],
)
def test_two_underscores():
"""Escape two or more underscores inside words."""
assert_equal(
gfm('foo_bar_baz'),
'foo\\_bar\\_baz',
)
def test_newlines_simple():
"""Turn newlines into br tags in simple cases."""
assert_equal(
gfm('foo\nbar'),
'foo \nbar',
)
def test_newlines_group():
"""Convert newlines in all groups."""
assert_equal(
gfm('apple\npear\norange\n\nruby\npython\nerlang'),
'apple \npear \norange\n\nruby \npython \nerlang',
)
def test_newlines_long_group():
"""Convert newlines in even long groups."""
assert_equal(
gfm('apple\npear\norange\nbanana\n\nruby\npython\nerlang'),
'apple \npear \norange \nbanana\n\nruby \npython \nerlang',
)
def test_newlines_list():
"""Don't convert newlines in lists."""
assert_equal(
gfm('# foo\n# bar'),
'# foo\n# bar',
)
assert_equal(
gfm('* foo\n* bar'),
'* foo\n* bar',
)
# =========
# Functions
# =========
# Test if a variable exists (== isset function in PHP)
# ====================================================
def isset(variable):
return variable in locals() or variable in globals()
# Test whether a variable is an int or not
# =========================================
def isint(variable):
try:
int(variable)
return True
except ValueError:
return False
# List all files in path directory
# Works recursively
# Return files list with path relative to current dir
# ===================================================
def list_directory(path):
fichier = []
for root, dirs, files in os.walk(path):
@@ -203,11 +32,8 @@ def list_directory(path):
return fichier
# Return a list with the tags of a given article
# ==============================================
def get_tags(filename):
try:
with open(filename, 'r') as fh:
# Return a list with the tags of a given article (fh)
def get_tags(fh):
tag_line = ''
for line in fh.readlines():
if "@tags=" in line:
@@ -219,46 +45,37 @@ def get_tags(filename):
tags = [x.strip() for x in line[line.find("@tags=")+6:].split(",")]
return tags
except IOError:
sys.exit("[ERROR] Unable to open file "+filename+".")
# Return date of an article
# ========================
def get_date(filename):
try:
with open(filename, 'r') as fh:
for line in fh.readlines():
if "@date=" in line:
return line[line.find("@date=")+6:].strip()
sys.exit("[ERROR] Unable to determine date in article "+filename+".")
except IOError:
sys.exit("[ERROR] Unable to open file "+filename+".")
# Return the _number_ latest articles in _dir_ directory
# ======================================================
# Return the number latest articles in dir directory
def latest_articles(directory, number):
try:
latest_articles = subprocess.check_output(["git",
"ls-files",
directory],
universal_newlines=True)
except:
sys.exit("[ERROR] An error occurred when fetching file changes "
"from git.")
latest_articles = latest_articles.strip().split("\n")
latest_articles = [x for x in latest_articles if(isint(x[4:8]) and
(x.endswith(".html") or
x.endswith(".md")))]
latest_articles.sort(key=lambda x: (get_date(x)[4:8], get_date(x)[2:4],
get_date(x)[:2], get_date(x)[9:]),
reverse=True)
return latest_articles[:number]
now = datetime.datetime.now()
counter = 0
latest_articles = []
for i in range(int(now.strftime('%Y')), 0, -1):
if counter >= number:
break
if os.path.isdir(directory+"/"+str(i)):
for j in range(12, 0, -1):
if j < 10:
j = "0"+str(j)
if os.path.isdir(directory+"/"+str(i)+"/"+str(j)):
articles_list = list_directory(directory+str(i)+"/"+str(j))
# Sort by date the articles
articles_list.sort(key=lambda x: os.stat(x).st_mtime)
latest_articles += articles_list[:number-counter]
if len(latest_articles) < number-counter:
counter += len(articles_list)
else:
counter = number
return latest_articles
# Auto create necessary directories to write a file
# =================================================
def auto_dir(path):
directory = os.path.dirname(path)
try:
@@ -270,7 +87,6 @@ def auto_dir(path):
# Replace some user-specific syntax tags (to replace smileys for example)
# ========================================================================
def replace_tags(article, search_list, replace_list):
return_string = article
for search, replace in zip(search_list, replace_list):
@@ -278,31 +94,6 @@ def replace_tags(article, search_list, replace_list):
return return_string
# Return text in <div class="article"> for rss description
# ========================================================
def get_text_rss(content):
soup = BeautifulSoup(content)
date = soup.find(attrs={'class': 'date'})
date.extract()
title = soup.find(attrs={'class': 'article_title'})
title.extract()
return str(soup.div)
def remove_tags(html):
return ''.join(BeautifulSoup(html).findAll(text=True))
def truncate(text, length=100):
return text[:text.find('.', length) - 1] + ""
# Set locale
locale.setlocale(locale.LC_ALL, '')
# ========================
# Start of the main script
# ========================
try:
opts, args = getopt.gnu_getopt(sys.argv, "hf", ["help", "force-regen"])
except getopt.GetoptError:
@@ -328,8 +119,6 @@ for opt, arg in opts:
# Set parameters with params file
search_list = []
replace_list = []
months = ["Janvier", "Février", "Mars", "Avril", "Mai", "Juin", "Juillet",
"Août", "Septembre", "Octobre", "Novembre", "Décembre"]
try:
with open("raw/params", "r") as params_fh:
params = {}
@@ -337,18 +126,10 @@ try:
if line.strip() == "" or line.strip().startswith("#"):
continue
option, value = line.split("=", 1)
option = option.strip()
if option == "SEARCH":
search_list = [i.strip() for i in value.split(",")]
search_list = value.strip().split(",")
elif option == "REPLACE":
replace_list = [i.strip() for i in value.split(",")]
elif option == "MONTHS":
months = [i.strip() for i in value.split(",")]
elif option == "IGNORE_FILES":
params["IGNORE_FILES"] = [i.strip() for i in value.split(",")]
elif option == "BLOG_URL":
params["BLOG_URL"] = value.strip(" \n\t\r").rstrip("/")
replace_list = value.strip().split(",")
else:
params[option.strip()] = value.strip()
@@ -358,7 +139,6 @@ except IOError:
"parameters. Does such a file exist ? See doc for more info "
"on this file.")
print("[INFO] Blog url is "+params["BLOG_URL"]+".")
# Fill lists for modified, deleted and added files
modified_files = []
@@ -396,14 +176,8 @@ if not force_regen:
else:
sys.exit("[ERROR] An error occurred when running git diff.")
else:
try:
shutil.rmtree("blog/")
except FileNotFoundError:
pass
try:
shutil.rmtree("gen/")
except FileNotFoundError:
pass
added_files = list_directory("raw")
if not added_files and not modified_files and not deleted_files:
@@ -414,27 +188,23 @@ if not added_files and not modified_files and not deleted_files:
for filename in list(added_files):
direct_copy = False
if (not filename.startswith("raw/") or filename.endswith("~") or
filename in params["IGNORE_FILES"]):
if not filename.startswith("raw/"):
added_files.remove(filename)
continue
try:
int(filename[4:8])
if filename[4:8] not in years_list:
years_list.append(filename[4:8])
except ValueError:
direct_copy = True
try:
int(filename[9:11])
if filename[9:11] not in months_list:
months_list.append(filename[9:11])
int(filename[8:10])
months_list.append(filename[8:10])
except ValueError:
pass
if ((not filename.endswith(".html") and not filename.endswith(".ignore")
and not filename.endswith(".md"))
if ((not filename.endswith(".html") and not filename.endswith(".ignore"))
or direct_copy):
# Note : this deal with CSS, images or footer file
print("[INFO] (Direct copy) Copying directly the file "
@@ -454,26 +224,22 @@ for filename in list(added_files):
for filename in list(modified_files):
direct_copy = False
if (not filename.startswith("raw/") or filename.endswith("~")
or filename in params["IGNORE_FILES"]):
if not filename.startswith("raw/"):
modified_files.remove(filename)
continue
try:
int(filename[4:8])
if filename[4:8] not in years_list:
years_list.append(filename[4:8])
except ValueError:
direct_copy = True
try:
int(filename[9:11])
if filename[9:11] not in months_list:
months_list.append(filename[9:11])
int(filename[8:10])
months_list.append(filename[8:10])
except ValueError:
pass
if ((not filename.endswith(".html") and not filename.endswith(".ignore")
and not filename.endswith(".md"))
if ((not filename.endswith("html") and not filename.endswith("ignore"))
or direct_copy):
print("[INFO] (Direct copy) Updating directly the file "
+ filename[4:]+" in blog dir.")
@@ -482,7 +248,7 @@ for filename in list(modified_files):
modified_files.remove(filename)
continue
if filename.endswith(".ignore"):
if filename.endswith("ignore"):
print("[INFO] (Not published) Found not published article "
+ filename[4:-7]+".")
added_files.remove(filename)
@@ -491,35 +257,27 @@ for filename in list(modified_files):
for filename in list(deleted_files):
direct_copy = False
if (not filename.startswith("raw/") or filename.endswith("~") or
filename in params["IGNORE_FILES"]):
if not filename.startswith("raw/"):
deleted_files.remove(filename)
continue
try:
int(filename[4:8])
if filename[4:8] not in years_list:
years_list.append(filename[4:8])
except ValueError:
direct_delete = True
try:
int(filename[9:11])
if filename[9:11] not in months_list:
months_list.append(filename[9:11])
int(filename[8:10])
months_list.append(filename[8:10])
except ValueError:
pass
if ((not filename.endswith(".html") and not filename.endswith(".ignore")
and not filename.endswith(".md"))
or (isset("direct_delete") and direct_delete is True)):
if ((not filename.endswith("html") and not filename.endswith("ignore"))
or direct_delete):
print("[INFO] (Deleted file) Delete directly copied file "
+ filename[4:]+" in blog dir.")
try:
os.unlink(filename)
except FileNotFoundError:
pass
os.system('git rm '+filename)
deleted_files.remove(filename)
continue
@@ -529,7 +287,11 @@ print("[INFO] Deleted filed : "+", ".join(deleted_files))
print("[INFO] Updating tags for added and modified files.")
for filename in added_files:
tags = get_tags(filename)
try:
with open(filename, 'r') as fh:
tags = get_tags(fh)
except IOError:
sys.exit("[ERROR] Unable to open file "+filename+".")
if not tags:
sys.exit("[ERROR] (TAGS) In added article "+filename[4:]+" : "
@@ -549,7 +311,8 @@ for filename in added_files:
for filename in modified_files:
try:
tags = get_tags(filename)
with open(filename, 'r') as fh:
tags = get_tags(fh)
except IOError:
sys.exit("[ERROR] Unable to open file "+filename[4:]+".")
@@ -567,10 +330,7 @@ for filename in modified_files:
print("[INFO] (TAGS) Found new tag "
+ tag[:tag.index(".tmp")]+" for modified article "
+ filename[4:]+".")
try:
tags.remove(tag[9:])
except ValueError:
pass
tags.remove(tag_file[9:])
if (tag[tag.index("tags/") + 5:tag.index(".tmp")] not in tags
and filename[4:] in tag_file.read()):
tag_old = tag_file.read()
@@ -583,7 +343,12 @@ for filename in modified_files:
print("[INFO] (TAGS) Deleted tag " +
tag[:tag.index(".tmp")]+" in modified article " +
filename[4:]+".")
else:
tags.remove(tag_file[9:])
except IOError:
sys.exit("[ERROR] (TAGS) An error occurred when parsing tags "
" of article "+filename[4:]+".")
if not tag_file_write:
try:
os.unlink(tag)
print("[INFO] (TAGS) No more article with tag " +
@@ -592,24 +357,12 @@ for filename in modified_files:
print("[INFO] (TAGS) "+tag+" was found to be empty "
"but there was an error during deletion. "
"You should check manually.")
os.system('git rm '+tag)
print(tags)
try:
tags.remove(tag[9:])
except ValueError:
pass
except IOError:
sys.exit("[ERROR] (TAGS) An error occurred when parsing tags "
" of article "+filename[4:]+".")
# New tags created
for tag in [x for x in tags if "gen/tags/"+x+".tmp"
not in list_directory("gen/tags")]:
for tag in tags: # New tags created
try:
auto_dir("gen/tags/"+tag+".tmp")
with open("gen/tags/"+tag+".tmp", "a+") as tag_file:
# Delete tag file here if empty after deletion
tag_file.write(filename[4:]+"\n")
print("[INFO] (TAGS) Found new tag "+tag+" for "
"modified article "+filename[4:]+".")
@@ -619,7 +372,11 @@ for filename in modified_files:
# Delete tags for deleted files and delete all generated files
for filename in deleted_files:
tags = os.listdir("gen/tags/")
try:
with open(filename, 'r') as fh:
tags = get_tags(fh)
except IOError:
sys.exit("[ERROR] Unable to open file "+filename+".")
if not tags:
sys.exit("[ERROR] In deleted article "+filename[4:]+" : "
@@ -627,7 +384,7 @@ for filename in deleted_files:
for tag in tags:
try:
with open("gen/tags/"+tag, 'r+') as tag_file:
with open("gen/tags/"+tag+".tmp", 'r+') as tag_file:
tag_old = tag_file.read()
tag_file.truncate()
# Delete file in tag
@@ -651,18 +408,15 @@ for filename in deleted_files:
print("[INFO] (TAGS) "+tag+" was found to be empty "
"but there was an error during deletion. "
"You should check manually.")
os.system('git rm '+filename)
# Delete generated files
try:
os.unlink("gen/"+filename[4:filename.rfind('.')]+".gen")
os.unlink("gen/"+filename[4:-5]+".gen")
os.unlink("blog/"+filename[4:])
except FileNotFoundError:
print("[INFO] (DELETION) Article "+filename[4:]+" seems "
"to not have already been generated. "
"You should check manually.")
os.system("git rm gen/"+filename[4:filename.rfind('.')]+".gen")
os.system("git rm blog/"+filename[4:])
print("[INFO] (DELETION) Deleted article "+filename[4:] +
" in both gen and blog directories")
@@ -672,11 +426,11 @@ for filename in deleted_files:
last_articles = latest_articles("raw/", int(params["NB_ARTICLES_INDEX"]))
tags_full_list = list_directory("gen/tags")
# Generate html for each article (gen/ dir)
# Generate html for each article
for filename in added_files+modified_files:
try:
with open(filename, 'r') as fh:
article, title, date, author, tags = "", "", "", "", ""
article = "", "", "", "", ""
for line in fh.readlines():
article += line
if "@title=" in line:
@@ -701,65 +455,32 @@ for filename in added_files+modified_files:
date_readable = ("Le "+date[0:2]+"/"+date[2:4]+"/"+date[4:8] +
" à "+date[9:11]+":"+date[11:13])
day_aside = date[0:2]
month_aside = months[int(date[2:4]) - 1]
tags_comma = ""
tags = [i.strip() for i in tags.split(",")]
for tag in tags:
if tags_comma != "":
tags_comma += ", "
tags_comma += ("<a href=\""+params["BLOG_URL"] +
"/tags/"+tag+".html\">"+tag+"</a>")
# Markdown support
if filename.endswith(".md"):
article = markdown.markdown(gfm(article))
# Write generated HTML for this article in gen /
article = replace_tags(article, search_list, replace_list)
# Handle @article_path
article_path = params["BLOG_URL"] + "/" + date[4:8] + "/" + date[2:4]
article = article.replace("@article_path", article_path)
try:
auto_dir("gen/"+filename[4:filename.rfind('.')]+".gen")
with open("gen/"+filename[4:filename.rfind('.')]+".gen", 'w') as article_file:
auto_dir("gen/"+filename[4:-5]+".gen")
with open("gen/"+filename[4:-5]+".gen", 'w') as article_file:
article_file.write("<article>\n"
"\t<aside>\n"
"\t\t<p class=\"day\">"+day_aside+"</p>\n"
"\t\t<p class=\"month\">"+month_aside+"</p>\n"
"\t</aside>\n"
"\t<nav class=\"aside_article\"></nav>\n"
"\t<div class=\"article\">\n"
"\t\t<header><h1 class=\"article_title\"><a " +
"href=\""+params["BLOG_URL"]+"/"+filename[4:filename.rfind('.')]+'.html' +
"\">"+title+"</a></h1></header>\n"
"\t\t<h1>"+title+"</h1>\n"
"\t\t"+article+"\n"
"\t\t<footer><p class=\"date\">"+date_readable +
"</p>\n"
"\t\t<p class=\"tags\">Tags : "+tags_comma +
"</p></footer>\n"
"\t</div>\n"
"</article>\n")
"\t\t<p class=\"date\">"+date+"</p>\n"
"\t</div>\n")
print("[INFO] (GEN ARTICLES) Article "+filename[4:]+" generated")
except IOError:
sys.exit("[ERROR] An error occurred when writing generated HTML for "
"article "+filename[4:]+".")
# Starting to generate header file (except title)
tags_header = ""
for tag in sorted(tags_full_list, key=cmp_to_key(locale.strcoll)):
with open("gen/tags/"+tag[9:-4]+".tmp", "r") as tag_fh:
nb = len(tag_fh.readlines())
tags_header += "<div class=\"tag\">"
tags_header += ("<a href=\""+params["BLOG_URL"] +
"/tags/"+tag[9:-4]+".html\">")
tags_header += ("/"+tag[9:-4]+" ("+str(nb)+")")
tags_header += ("</a> ")
tags_header += "</div>"
tags_header = "<ul>"
for tag in tags_full_list:
tags_header += "<li>"
tags_header += ("<a href=\""+params["BLOG_URL"]+tag[4:-4]+".html\">" +
tag[9:-4]+"</a>")
tags_header += "</li>"
tags_header += "</ul>"
try:
with open("raw/header.html", "r") as header_fh:
header = header_fh.read()
@@ -767,36 +488,29 @@ except IOError:
sys.exit("[ERROR] Unable to open raw/header.html file.")
header = header.replace("@tags", tags_header, 1)
header = header.replace("@blog_url", params["BLOG_URL"])
articles_header = ""
articles_index = ""
header = header.replace("@blog_url", params["BLOG_URL"], 1)
articles_header = "<ul>"
articles_index = "<ul>"
rss = ("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n")
if os.path.isfile("raw/rss.css"):
rss += ("<?xml-stylesheet type=\"text/css\" " +
"href=\""+params["PROTOCOL"]+params["BLOG_URL"]+"/rss.css\"?>\n")
rss += ("<rss version=\"2.0\" xmlns:atom=\"http://www.w3.org/2005/Atom\" "
rss = ("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
"<rss version=\"2.0\" xmlns:atom=\"http://www.w3.org/2005/Atom\" "
"xmlns:content=\"http://purl.org/rss/1.0/modules/content/\">\n")
rss += ("\t<channel>"
"\t\t<atom:link href=\""+params["PROTOCOL"]+params["BLOG_URL"] +
"/rss.xml\" rel=\"self\" type=\"application/rss+xml\"/>\n"
"\t\t<atom:link href=\""+params["BLOG_URL"]+"rss.xml\" "
"rel=\"self\" type=\"application/rss+xml\"/>\n"
"\t\t<title>"+params["BLOG_TITLE"]+"</title>\n"
"\t\t<link>"+params["PROTOCOL"] + params["BLOG_URL"]+"</link>\n"
"\t\t<link>"+params["BLOG_URL"]+"</link>\n"
"\t\t<description>"+params["DESCRIPTION"]+"</description>\n"
"\t\t<language>"+params["LANGUAGE"]+"</language>\n"
"\t\t<copyright>"+params["COPYRIGHT"]+"</copyright>\n"
"\t\t<webMaster>"+params["WEBMASTER"]+"</webMaster>\n"
"\t\t<lastBuildDate>" +
utils.formatdate(mktime(gmtime()))+"</lastBuildDate>\n")
strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())+"</lastBuildDate>\n")
# Generate header (except title) + index file + rss file
for i, article in enumerate(["gen/"+x[4:x.rfind('.')]+".gen" for x in last_articles]):
for i, article in enumerate(last_articles):
content, title, tags, date, author = "", "", "", "", ""
content_desc = ""
try:
with open(article, "r") as fh:
for line in fh.readlines():
@@ -813,7 +527,6 @@ for i, article in enumerate(["gen/"+x[4:x.rfind('.')]+".gen" for x in last_artic
if "@tags=" in line:
tags = line[line.find("@tags=")+6:].strip()
continue
content_desc += line
except IOError:
sys.exit("[ERROR] Unable to open "+article+" file.")
@@ -822,44 +535,36 @@ for i, article in enumerate(["gen/"+x[4:x.rfind('.')]+".gen" for x in last_artic
if i < 5:
articles_header += "<li>"
articles_header += ("<a href=\""+params["BLOG_URL"] + "/" +
articles_header += ("<a href=\""+params["BLOG_URL"] +
article[4:-4]+".html\">"+title+"</a>")
articles_header += "</li>"
articles_index += content
date_rss = utils.formatdate(mktime(gmtime(mktime(datetime.
datetime.
strptime(date,
"%d%m%Y-%H%M")
.timetuple()))))
articles_index += "<li>"
articles_index += ("<a href=\""+params["BLOG_URL"] +
article[4:-4]+".html\">"+title+"</a>")
articles_index += "</li>"
rss += ("\t\t<item>\n" +
"\t\t\t<title>"+remove_tags(title)+"</title>\n" +
"\t\t\t<link>"+params["PROTOCOL"]+params["BLOG_URL"]+"/" +
article[4:-4]+".html</link>\n" +
"\t\t\t<guid isPermaLink=\"true\">" +
params["PROTOCOL"] + params["BLOG_URL"]+"/"+article[4:-4]+".html</guid>\n"
# Apply remove_tags twice to also remove tags in @title and so
"\t\t\t<description>" + truncate(remove_tags(remove_tags(replace_tags(get_text_rss(content_desc),
search_list,
replace_list)))) +
"</description>\n" +
"\t\t\t<content:encoded><![CDATA[" +
replace_tags(get_text_rss(content),
search_list,
replace_list).replace('"'+params['BLOG_URL'],
'"'+params['BLOG_URL_RSS']) +
"]]></content:encoded>\n" +
"\t\t\t<pubDate>"+date_rss+"</pubDate>\n" +
("\n".join(["\t\t\t<category>" + i.strip() + "</category>"
for i in tags.split(",")]))+"\n" +
"\t\t\t<author>"+params["WEBMASTER"]+"</author>\n" +
date_rss = strftime("%a, %d %b %Y %H:%M:%S +0000",
gmtime(mktime(datetime.datetime.strptime(date,
"%d%m%Y-%H%M")
.timetuple())))
rss += ("\t\t<item>\n"
"\t\t\t<title>"+title+"</title>\n"
"\t\t\t<link>"+params["BLOG_URL"]+article[5:]+"</link>\n"
"\t\t\t<guid isPermaLink=\"false\">" +
params["BLOG_URL"]+article[5:]+"</guid>\n"
"\t\t\t<description><![CDATA[" +
replace_tags(article, search_list, replace_list) +
"]]></description>\n"
"\t\t\t<pubDate>"+date_rss+"</pubDate>\n"
"\t\t\t<category>"+', '.join(tags)+"</category>\n"
"\t\t\t<author>"+params["WEBMASTER"]+"</author>\n"
"\t\t</item>\n")
# Finishing header gen
articles_header += ("<li><a "+"href=\""+params["BLOG_URL"] +
"/archives.html\">"+"Archives</a></li>")
articles_header += "</ul>"
header = header.replace("@articles", articles_header, 1)
try:
@@ -876,15 +581,14 @@ except IOError:
try:
with open("raw/footer.html", "r") as footer_fh:
footer = footer_fh.read()
footer = footer.replace("@blog_url", params["BLOG_URL"])
except IOError:
sys.exit("[ERROR] An error occurred while parsing footer "
"file raw/footer.html.")
# Finishing index gen
articles_index += "</ul>"
index = (header.replace("@title", params["BLOG_TITLE"], 1) +
articles_index + "<p class=\"archives\"><a "+"href=\"" +
params["BLOG_URL"]+"/archives.html\">Archives</a></p>"+footer)
articles_index + footer)
try:
with open("blog/index.html", "w") as index_fh:
@@ -909,16 +613,10 @@ for tag in tags_full_list:
tag_content = header.replace("@title", params["BLOG_TITLE"] +
" - "+tag[4:-4], 1)
# Sort by date
with open(tag, "r") as tag_gen_fh:
articles_list = ["gen/"+line.replace(".html", ".gen").replace('.md', '.gen').strip() for line
in tag_gen_fh.readlines()]
articles_list.sort(key=lambda x: (get_date(x)[4:8], get_date(x)[2:4],
get_date(x)[:2], get_date(x)[9:]),
reverse=True)
for article in articles_list:
with open(article.strip(), "r") as article_fh:
for line in tag_gen_fh.readlines():
line = line.replace(".html", ".gen")
with open("gen/"+line.strip(), "r") as article_fh:
tag_content += article_fh.read()
tag_content += footer
@@ -932,32 +630,37 @@ for tag in tags_full_list:
sys.exit("[ERROR] An error occurred while generating tag page \"" +
tag[9:-4]+"\"")
# Finish generating HTML for articles (blog/ dir)
for article in added_files+modified_files:
# Finish articles pages generation
for filename in added_files+modified_files:
try:
with open("gen/"+article[4:article.rfind('.')]+".gen", "r") as article_fh:
content = article_fh.read()
auto_dir("blog/"+filename[4:])
with open("blog/"+filename[4:], "w") as article_fh:
with open("gen/header.gen", "r") as header_gen_fh:
article = header_gen_fh.read()
with open("gen/"+filename[4:-5]+".gen", "r") as article_gen_fh:
line = article_gen_fh.readline()
while "@title" not in line:
line = article_gen_fh.readline()
line = line.strip()
title_pos = line.find("@title=")
title = line[title_pos+7:]
article_gen_fh.seek(0)
article = article.replace("@title", params["BLOG_TITLE"] +
" - "+title, 1)
article += replace_tags(article_gen_fh.read(),
search_list,
replace_list)
with open("gen/footer.gen", "r") as footer_gen_fh:
article += footer_gen_fh.read()
article_fh.write(article)
print("[INFO] (ARTICLES) Article page for "+filename[4:] +
" has been generated successfully.")
except IOError:
sys.exit("[ERROR] An error occurred while opening "
"gen/"+article[4:article.rfind('.')]+".gen file.")
sys.exit("[ERROR] An error occurred while generating article " +
filename[4:]+" page.")
for line in content.split("\n"):
if "@title=" in line:
title = line[line.find("@title=")+7:].strip()
break
content = header.replace("@title", params["BLOG_TITLE"] + " - " +
title, 1) + content + footer
try:
auto_dir("blog/"+article[4:article.rfind('.')]+'.html')
with open("blog/"+article[4:article.rfind('.')]+'.html', "w") as article_fh:
article_fh.write(content)
print("[INFO] (GEN ARTICLES) HTML file generated in blog dir for "
"article "+article[4:article.rfind('.')]+'.html'+".")
except IOError:
sys.exit("[ERROR] Unable to write blog/"+article[4:article.rfind('.')]+'.html'+" file.")
# Regenerate pages for years / months
# Regenerate page for years / months
years_list.sort(reverse=True)
for i in years_list:
try:
@@ -965,7 +668,7 @@ for i in years_list:
except ValueError:
continue
# Generate pages per year
# Generate page per year
page_year = header.replace("@title", params["BLOG_TITLE"]+" - "+i, 1)
months_list.sort(reverse=True)
@@ -978,10 +681,7 @@ for i in years_list:
params["BLOG_TITLE"]+" - "+i+"/"+j, 1)
articles_list = list_directory("gen/"+i+"/"+j)
articles_list.sort(key=lambda x: (get_date(x)[4:8], get_date(x)[2:4],
get_date(x)[:2], get_date(x)[9:]),
reverse=True)
articles_list.sort(key=lambda x: os.stat(x).st_mtime, reverse=True)
for article in articles_list:
try:
with open(article, "r") as article_fh:
@@ -1008,72 +708,3 @@ for i in years_list:
page_year_fh.write(page_year)
except IOError:
sys.exit("[ERROR] Unable to write index file for "+i+".")
# Generate archive page
archives = header.replace("@title", params["BLOG_TITLE"]+" - Archives", 1)
years_list = os.listdir("blog/")
years_list.sort(reverse=True)
archives += ("<article><div class=\"article\"><h1 " +
"class=\"article_title\">Archives</h1><ul>")
for i in years_list:
if not os.path.isdir("blog/"+i):
continue
try:
int(i)
except ValueError:
continue
archives += "<li><a href=\""+params["BLOG_URL"]+"/"+i+"\">"+i+"</a></li>"
archives += "<ul>"
months_list = os.listdir("blog/"+i)
months_list.sort(reverse=True)
for j in months_list:
if not os.path.isdir("blog/"+i+"/"+j):
continue
archives += ("<li><a href=\""+params["BLOG_URL"] + "/" + i +
"/"+j+"\">"+datetime.datetime.
strptime(j, "%m").strftime("%B").title()+"</a></li>")
archives += "</ul>"
archives += "</ul></div></article>"
archives += footer
try:
with open("blog/archives.html", "w") as archives_fh:
archives_fh.write(archives)
except IOError:
sys.exit("[ERROR] Unable to write blog/archives.html file.")
# Include header and footer for pages that need it
for i in os.listdir("blog/"):
if (os.path.isdir("blog/"+i) or i in ["header.html", "footer.html",
"rss.xml", "style.css", "index.html",
"archives.html", "humans.txt"]):
continue
if not i.endswith(".html"):
continue
with open("blog/"+i, 'r+') as fh:
content = fh.read()
fh.seek(0)
if content.find("#include_header_here") != -1:
content = content.replace("#include_header_here",
header.replace("@title",
(params["BLOG_TITLE"] +
" - "+i[:i.rfind('.')].title()),
1),
1)
fh.write(content)
fh.seek(0)
if content.find("#include_footer_here") != -1:
fh.write(content.replace("#include_footer_here", footer, 1))
os.system("git add --ignore-removal blog/ gen/")
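The RSS generation in the script above stores article dates as `DDMMYYYY-HHMM` stamps and converts them to RFC 822 strings for the `<pubDate>` element. A minimal sketch of that conversion using only the standard library (the helper name `rss_date` is illustrative, not from the script; one branch formats the same timestamp with `utils.formatdate`, the other with an explicit `strftime`, and both yield an RFC 822 date):

```python
import datetime
from email import utils
from time import mktime

def rss_date(date_str):
    # Parse blogit's internal stamp, e.g. "15092013-1230" (DDMMYYYY-HHMM),
    # then render it as an RFC 822 date as required by RSS 2.0 <pubDate>.
    parsed = datetime.datetime.strptime(date_str, "%d%m%Y-%H%M")
    return utils.formatdate(mktime(parsed.timetuple()))

print(rss_date("15092013-1230"))
```

`mktime` interprets the stamp in local time, which is why the script round-trips through `gmtime` before formatting; the sketch drops that extra step for brevity.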


@@ -4,7 +4,7 @@
@title=Un exemple d'article
@tags=test
-->
<p>1Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus</p>
<p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus</p>
<p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus</p>


@@ -1,29 +0,0 @@
#include_header_here
<article>
<aside class="aside_article">
<p class="month">Phyks</p>
</aside>
<div class="article">
<h1 class="article_title">Contact</h1>
<h2>E-mail</h2>
<p>[FR] Vous pouvez me contacter par e-mail à l'adresse suivante (pseudo@domaine.me) :</p>
<p>[EN] You can contact me using the following e-mail address (nick@domain.me) :</p>
<p class="center"><span class="contact_e-mail">@</span></p>
<h2>Jabber</h2>
<p>[FR] Vous pouvez également me joindre sur Jabber :</p>
<p>[EN] I'm also available very often on Jabber :</p>
<p class="center"><span class="contact_e-mail">@</span></p>
<h2>Divers</h2>
<ul>
<li>Mon <a href="https://github.com/phyks/">profil Github</a>.</li>
<li>[FR] Tous les codes que j'écris et les articles de ce blog sont sous licence <em>BEERWARE</em> (sauf mention contraire). Vous êtes libres de faire tout ce que vous voulez avec. Si vous souhaitez me soutenir, le meilleur moyen reste de partager ces informations autour de vous (et de citer la source :). Vous pouvez également me payer <del>une bière</del> un soda <em>via</em> Flattr ou tout autre moyen qui vous convient.</li>
<li>[EN] All my source codes and articles on my blog are under a <em>BEERWARE</em> license (except if anything special is specified). You are free to do whatever you want with them. If you want to support me, the best way is to share these pieces of information around you (and to cite the source :). You can also pay me a <del>beer</del> soda <em>via</em> Flattr or any means you want.</li>
</ul>
</div>
</article>
#include_footer_here
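Static pages such as this contact page carry `#include_header_here` / `#include_footer_here` markers; the final loop of the generation script splices the generated header and footer in at those points with `str.replace(..., 1)`. A minimal sketch of the mechanism (the function name `include_chrome` is illustrative, not from the script):

```python
def include_chrome(page, header, footer):
    # Replace each marker at most once, mirroring the
    # str.replace(..., 1) calls at the end of the script.
    page = page.replace("#include_header_here", header, 1)
    page = page.replace("#include_footer_here", footer, 1)
    return page

page = "#include_header_here\n<article>Contact</article>\n#include_footer_here"
print(include_chrome(page, "<header/>", "<footer/>"))
```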


@@ -1,291 +0,0 @@
html, body {
margin: 0;
padding: 0;
background-color: rgb(35, 34, 34);
background-image: url('img/bg.png');
font-family: "DejaVu Sans", Verdana, "Bitstream Vera Sans", Geneva, sans-serif;
line-height: 1.5em;
text-align: justify;
}
/* General classes */
.monospace {
font-family: "Lucida Console", Monaco, monospace;
}
.center {
text-align: center;
}
.contact_e-mail:before {
unicode-bidi: bidi-override;
direction: rtl;
content: "em.skyhp";
}
.contact_e-mail:after {
unicode-bidi: bidi-override;
direction: rtl;
content: "skyhp";
}
/* Wrapper */
#wrapper {
padding-left: 17em;
transition: all 0.4s ease 0s;
}
/* Hide the header and display it only in responsive view */
#header {
display: none;
text-align: center;
width: 50%;
margin: auto;
font-size: 0.9em;
padding: 0.3em;
}
#header h1 {
font-weight: normal;
padding: 0;
margin: 0;
margin-top: 0.5em;
background-color: rgb(117, 170, 39);
background-image: url("img/sidebar.png");
border: 1px solid black;
border-radius: 0.2em;
padding: 0.6em;
}
#header a {
color: white;
text-decoration: none;
}
/* Sidebar */
#sidebar-wrapper {
margin-left: -16em;
position: fixed;
left: 16em;
width: 16em;
height: 100%;
background: url('img/sidebar.png') repeat scroll 0% 0% rgb(17, 78, 121);
overflow-y: auto;
transition: all 0.4s ease 0s;
color: white;
padding-left: 0.5em;
padding-right: 0.5em;
font-size: 0.9em;
z-index: 1000;
}
#sidebar-wrapper a {
color: white;
}
#sidebar-wrapper h2 {
font-weight: normal;
text-align: center;
margin: 0.5em;
}
#sidebar-title {
font-size: 2em;
margin-top: 0.5em;
padding: 0.7em 0.5em;
background-color: rgb(117, 170, 39);
background-image: url("img/sidebar.png");
border-radius: 0.2em;
font-weight: normal;
text-align: center;
border: 1px solid black;
}
#sidebar-title a {
text-decoration: none;
}
#sidebar-tags {
text-align: center;
}
#sidebar-tags .tag {
display: inline;
}
#sidebar-tags .tag img {
width: 20%;
max-width: 4em;
margin: 0.5em 0.5em 1.5em;
}
#sidebar-tags .tag .popup {
position: absolute;
margin-left: -35%;
word-wrap: break-word;
width: 33%;
margin-top: 1em;
color: rgb(117, 170, 39);
background: none repeat scroll 0% 0% rgba(0, 0, 0, 0.9);
padding: 1em;
border-radius: 3px;
box-shadow: 0px 0px 2px rgba(0, 0, 0, 0.5);
opacity: 0;
text-align: center;
transform: scale(0) rotate(-12deg);
transition: all 0.25s ease 0s;
}
#sidebar-tags .tag:hover .popup, #sidebar-tags .tag:focus .popup
{
transform: scale(1) rotate(0);
opacity: 0.8;
}
#sidebar-articles {
opacity: 0.7;
text-align: center;
list-style-type: none;
padding: 0;
}
#sidebar-links {
list-style-type: none;
text-align: center;
padding: 0;
}
#sidebar-links li {
background-color: rgb(117, 170, 39);
background-image: url("img/sidebar.png");
text-align: right;
margin-right: 2em;
padding-right: 1em;
margin-bottom: 1em;
margin-left: -0.5em;
height: 2em;
border-top-right-radius: 0.7em;
border-bottom-right-radius: 0.7em;
border: 1px solid black;
transition: all 0.4s ease 0s;
}
#sidebar-links li:hover {
transform: scale(1.1);
}
/* Articles */
article {
max-width: 70em;
margin: auto;
}
.article {
background-color: white;
margin-left: 4.5em;
padding: 1.3em;
position: relative;
margin-bottom: 3em;
min-height: 5.48em;
}
#articles article:last-child {
margin-bottom: 0;
}
#articles h1, #articles h2, #articles h3, #articles h4, #articles h5 {
font-family: "Lucida Console", Monaco, monospace;
font-weight: normal;
}
article .article_title {
text-align: center;
margin-top: 0.1em;
margin-bottom: 1.5em;
}
#articles {
width: calc(100% - 1.5em);
padding-top: 1.5em;
}
#articles h1 {
margin: 0;
}
.aside_article {
position: absolute;
background-color: white;
font-size: 1.5em;
height: 4.5em;
padding: 0 0.5em;
-webkit-transform-origin: 100% 0;
-webkit-transform: translateX(-100%) translateY(1.2em) rotate(-90deg);
transform-origin: 100% 0;
transform: translateX(-100%) translateY(1.2em) rotate(-90deg);
}
.aside_article p {
display: block;
}
.aside_article .day {
float: right;
margin-bottom: 0.3em;
margin-top: 0.4em;
-webkit-transform: rotate(90deg);
transform: rotate(90deg);
width: 100%;
text-align: center;
}
#articles .date {
font-size: 0.8em;
font-style: italic;
text-align: right;
margin: 0;
}
.archives {
text-align: center;
color: white;
}
.archives a {
color: white;
}
/* Media queries */
@media (max-width: 767px) {
#wrapper {
padding-left: 1.5em;
}
#sidebar-wrapper {
left: 0;
}
#sidebar-wrapper:hover {
left: 16em;
width: 16em;
transition: all 0.4s ease 0s;
}
#sidebar-title {
display: none;
}
}
@media (max-width: 600px) {
.aside_article {
display: none;
}
.article {
margin-left: auto;
}
#header {
display: block;
}
}


@@ -1,18 +0,0 @@
#include_header_here
<article>
<aside class="aside_article">
<p class="month">Divers</p>
</aside>
<div class="article">
<h1 class="article_title">Liens divers</h1>
<ul>
<li><a href="#base_url/pub/">Divers documents en vrac</a></li>
<li><a href="#base_url/pub/respawn">Mon respawn</a></li>
<li><a href="http://git.phyks.me">Mon dépôt Git, alternatif à Github</a></li>
<li><a href="#base_url/autohebergement.html">Ma doc sur l'autohébergement</a></li>
<li><a href="http://snippet.phyks.me">Mes snippets</a></li>
<li><a href="http://velib.phyks.me">Ma webapp vélib</a> (cf <a href="https://github.com/phyks/BikeInParis">le projet sur Github</a>)</li>
</ul>
</div>
</article>
#include_footer_here


@@ -1,41 +1,30 @@
<!DOCTYPE html>
<!doctype html>
<html lang="fr">
<head>
<meta charset="utf-8">
<title>@title</title>
<link rel="stylesheet" href="design.css"/>
<link type="text/plain" rel="author" href="humans.txt"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>@titre</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<div id="wrapper">
<!-- Sidebar -->
<div id="sidebar-wrapper">
<h1 id="sidebar-title"><a href="@blog_url">~Phyks</a></h1>
<div id="left">
<h1 id="head_title">Phyks' Blog</h1>
<hr/><hr/>
<h2>Catégories</h2>
<nav id="sidebar-tags">
@tags
</nav>
<div id="categories">
@categories
</div>
<hr/>
<h2>Derniers articles</h2>
<ul id="sidebar-articles">
<div id="last_articles">
@articles
</ul>
</div>
<hr/>
<h2>Liens</h2>
<ul id="sidebar-links">
<li><a href="contact.html" title="Contact">Me contacter</a></li>
<li class="monospace"><a href="//links.phyks.me" title="Mon Shaarli">find ~phyks -type l</a></li>
<li><a href="https://github.com/phyks/" title="Github">Mon Github</a></li>
<li><a href="divers.html" title="Divers">Divers</a></li>
<ul class="links">
<li><a href="contact.html">Me contacter</a></li>
<li><a href="http://links.phyks.me">Mon shaarli</a></li>
<li><a href="http://projet.phyks.me">Mes projets</a></li>
</ul>
</div>
<!-- Page content -->
<div id="header">
<h1><a href="@blog_url">~Phyks</a></h1>
</div>
<div id="articles">
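The template above is driven by `@`-placeholders (`@title`, `@tags`, `@articles`, `@blog_url`) that the script fills with plain `str.replace`. A minimal sketch of that substitution (the `fill_template` helper is illustrative, not part of the script):

```python
def fill_template(template, values):
    # Substitute each blogit-style @placeholder with its value.
    for key, value in values.items():
        template = template.replace("@" + key, value)
    return template

header = '<title>@title</title>\n<a href="@blog_url">~Phyks</a>'
print(fill_template(header, {"title": "Phyks' blog", "blog_url": "http://phyks.me"}))
```

The script itself passes a count of 1 to several of these calls so that only the first occurrence (e.g. the `<title>` slot) is replaced.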


@@ -1,11 +0,0 @@
/* AUTHOR */
Phyks (Lucas Verney)
Website : http://phyks.me
Send me an e-mail : phyks@phyks.me
Or contact me on jabber : phyks@phyks.me
Or meet me on github : https://github.com/phyks/
/* SITE */
Last update: 2013/09/15
Standards: HTML5, CSS3 (valid)
Software: Only open-source software :)

Binary file not shown (before: 162 KiB)

Binary file not shown (before: 64 KiB)


@@ -1,12 +1,9 @@
BLOG_TITLE = Blog
BLOG_TITLE = Phyks' blog
NB_ARTICLES_INDEX = 20
BLOG_URL = #BLOG_URL
PROTOCOL = http
IGNORE_FILES =
BLOG_URL = file:///home/lucas/Blog/git/blog/
#RSS params
WEBMASTER = #EMAIL_URL
WEBMASTER = webmaster@phyks.me (Phyks)
LANGUAGE = fr
DESCRIPTION =
COPYRIGHT =
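The `raw/params` file shown above is a flat `KEY = VALUE` list; comment lines such as `#RSS params` carry no `=`, and some values (like `#BLOG_URL`) are placeholders to be filled by the user. A sketch of a parser for this format (an assumption about the format, not the script's actual parser):

```python
def read_params(text):
    # One "KEY = VALUE" pair per line; lines without "=" (comments,
    # blanks) are skipped, and whitespace on both sides is trimmed.
    params = {}
    for line in text.splitlines():
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

sample = "BLOG_TITLE = Phyks' blog\n#RSS params\nLANGUAGE = fr\nDESCRIPTION ="
print(read_params(sample))
```

Splitting on the first `=` keeps values containing `=` or `#` intact, which matters for entries like `BLOG_URL = #BLOG_URL`.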

Binary file not shown (before: 16 KiB)