make sure these pages can be crawled by the eventual robots.txt generator

main
Ryan Rix 2022-09-01 15:27:59 -07:00
parent b8997062cd
commit e2520dd50a
8 changed files with 8 additions and 0 deletions
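For context, the =#+ARCOLOGY_ALLOW_CRAWL: t= keyword added in each file below is meant to feed a future robots.txt generator. A minimal sketch of what such a generator could look like as a FastAPI route, assuming a hypothetical =Page= record and =get_pages()= helper rather than the project's actual API:

#+begin_src python
# Hypothetical sketch only: serve a robots.txt that disallows any page whose
# source document did not set #+ARCOLOGY_ALLOW_CRAWL. The Page shape and
# get_pages() helper are assumptions, not the project's real data model.
from dataclasses import dataclass
from typing import List

from fastapi import FastAPI
from fastapi.responses import PlainTextResponse

app = FastAPI()

@dataclass
class Page:
    route: str          # e.g. "/arcology/fastapi"
    allow_crawl: bool   # True when #+ARCOLOGY_ALLOW_CRAWL: t is present

def get_pages() -> List[Page]:
    return []  # placeholder; a real generator would query the Arcology database

@app.get("/robots.txt", response_class=PlainTextResponse)
def robots_txt() -> str:
    lines = ["User-agent: *"]
    for page in get_pages():
        if not page.allow_crawl:
            lines.append(f"Disallow: {page.route}")
    return "\n".join(lines) + "\n"
#+end_src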

@@ -6,6 +6,7 @@
#+AUTO_TANGLE: t
#+filetags: :Project:Arcology:Development:
#+ARCOLOGY_KEY: arcology/fastapi
#+ARCOLOGY_ALLOW_CRAWL: t
I learned a lot in building [[id:1d917282-ecf4-4d4c-ba49-628cbb4bb8cc][The Arcology Project]] the first time around, and now that I have [[id:26762cec-7934-4275-8f0b-731ee0e22e07][Migrated to org-roam v2]] I need to evaluate the project, fix it, and get it running again.

@@ -6,6 +6,7 @@
#+filetags: :Project:
#+AUTO_TANGLE: t
#+ARCOLOGY_KEY: arcology/feed-gen
#+ARCOLOGY_ALLOW_CRAWL: t
This module renders an [[https://en.wikipedia.org/wiki/Atom_(web_standard)][Atom feed]]. Any page in the Arcology can now define an =#+ARCOLOGY_FEED= keyword and, in doing so, create a new route in the [[id:20220225T175638.482695][Arcology Public Router]] which will render an Atom feed. The semantics of the feed more-or-less follow the expectations defined in =ox-rss=: any heading with an =ID= property and a =PUBDATE= property containing an org-mode active timestamp will be published to the feed, and any entry with an =ID= will have a =PUBDATE= added to it by invoking =(org-rss-add-pubdate-property)=.
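A rough illustration of that inclusion rule, with a hypothetical =Heading= record standing in for the project's real data model:

#+begin_src python
# Keep only headings that qualify for the Atom feed: an ID property plus a
# PUBDATE property containing an org-mode active timestamp, e.g. <2022-09-01 Thu>.
import re
from dataclasses import dataclass, field
from typing import Dict, List

ACTIVE_TIMESTAMP = re.compile(r"<\d{4}-\d{2}-\d{2}[^>]*>")

@dataclass
class Heading:
    title: str
    properties: Dict[str, str] = field(default_factory=dict)

def feed_entries(headings: List[Heading]) -> List[Heading]:
    return [
        h for h in headings
        if "ID" in h.properties
        and ACTIVE_TIMESTAMP.search(h.properties.get("PUBDATE", ""))
    ]
#+end_src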

@@ -4,6 +4,7 @@
:END:
#+TITLE: Arcology Automated Database Builder
#+ARCOLOGY_KEY: arcology/db-builder
#+ARCOLOGY_ALLOW_CRAWL: t
#+AUTO_TANGLE: t

@@ -4,6 +4,7 @@
#+TITLE: Arcology Poetry Pyproject
#+filetags: :Project:Arcology:
#+ARCOLOGY_KEY: arcology/poetry
#+ARCOLOGY_ALLOW_CRAWL: t
#+AUTO_TANGLE: t
Okay so the [[id:arcology/fastapi][Arcology FastAPI]] package is built with =poetry=. I run the commands, look at the output, and copy it back in here... This is not very ergonomic right now, but I don't have a better idea on how to manage these literately.

@@ -5,6 +5,7 @@
#+filetags: :Project:Arcology:
#+ARCOLOGY_KEY: arcology/routing
#+ARCOLOGY_ALLOW_CRAWL: t
#+AUTO_TANGLE: t
This is a way to abstract URL logic between the development domain and the production domain without so much fuss. It's a pair of modules that have an identical export. Each one has:
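A rough sketch of that pair-of-modules pattern, using hypothetical module and function names rather than the project's actual exports:

#+begin_src python
# routing_dev.py -- development flavour (names here are hypothetical)
SITE_DOMAIN = "http://localhost:8000"

def url_for(path: str) -> str:
    """Build an absolute URL against the development domain."""
    return f"{SITE_DOMAIN}{path}"

# routing_prod.py -- identical export surface, production domain instead
SITE_DOMAIN = "https://arcology.example"

def url_for(path: str) -> str:
    """Build an absolute URL against the production domain."""
    return f"{SITE_DOMAIN}{path}"
#+end_src

Because both modules export the same names, the rest of the application can import whichever one matches the environment without changing any call sites.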

@@ -5,6 +5,7 @@
#+TITLE: Navigating the Arcology Site Graph with SigmaJS
#+filetags: :Project:Arcology:Development:
#+ARCOLOGY_KEY: arcology/sitemaps
#+ARCOLOGY_ALLOW_CRAWL: t
#+AUTO_TANGLE: t

@@ -6,6 +6,7 @@
#+AUTO_TANGLE: t
#+FILETAGS: :Arcology:Development:
#+ARCOLOGY_KEY: arcology/sites
#+ARCOLOGY_ALLOW_CRAWL: t
Arcology is designed to allow a writer to publish writing from across their [[id:cce/org-roam][org-roam]] [[id:knowledge_base][Knowledge Base]] to multiple domains, interlinking between them where necessary. Each site can provide its own CSS -- and maybe eventually its own page template -- and some minimal metadata, all of which is stored in dataclasses that serve as courtesy classes holding data for the HTML generator and the link router.
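A minimal sketch of that per-site dataclass, with field names that are assumptions rather than the project's actual schema:

#+begin_src python
from dataclasses import dataclass

@dataclass(frozen=True)
class Site:
    key: str        # short identifier, e.g. "arcology"
    domain: str     # domain the site is served from
    title: str      # human-readable site name
    css_file: str   # per-site stylesheet consumed by the HTML generator

SITES = [
    Site(key="arcology", domain="arcology.example",
         title="The Arcology", css_file="arcology.css"),
]
#+end_src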

@@ -6,6 +6,7 @@
#+AUTO_TANGLE: t
#+filetags: :Arcology:Development:
#+ARCOLOGY_KEY: arcology/index
#+ARCOLOGY_ALLOW_CRAWL: t
* Arcology in Brief