Automatically list all courses (slide.channel)
Option A — cURL (raw JSON-RPC)
Replace the domain, database, user, and password.
- Authenticate to obtain the uid:
```bash
curl -s https://votre-domaine.odoo.com/jsonrpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "method": "call", "id": 1,
    "params": {
      "service": "common", "method": "login",
      "args": ["VOTRE_DB", "user@example.com", "VOTRE_MDP"]
    }
  }'
```
Note the `result` value → that is your UID.
- List all courses (a few key fields):
```bash
curl -s https://votre-domaine.odoo.com/jsonrpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "method": "call", "id": 2,
    "params": {
      "service": "object", "method": "execute_kw",
      "args": [
        "VOTRE_DB", UID, "VOTRE_MDP",
        "slide.channel", "search_read",
        [[]],
        {"fields": ["id", "name", "category", "website_published", "sequence", "description"],
         "order": "sequence asc"}
      ]
    }
  }'
```
Option B — Ready-to-use Python script
This script:
- connects via JSON-RPC,
- fetches all slide.channel records,
- computes, for each course, the total number of slides plus the number of sections (chapters) and content items,
- prints a table and can optionally write a CSV.
```python
import requests, csv, json, sys

# === PARAMETERS TO ADAPT ===
BASE = "https://votre-domaine.odoo.com/jsonrpc"
DB = "VOTRE_DB"
USER = "user@example.com"
PWD = "VOTRE_MDP"
WRITE_CSV = True
CSV_PATH = "odoo_courses.csv"
# ============================

def rpc(service, method, *args):
    payload = {"jsonrpc": "2.0", "method": "call", "id": 1,
               "params": {"service": service, "method": method, "args": list(args)}}
    r = requests.post(BASE, json=payload, timeout=60)
    r.raise_for_status()
    data = r.json()
    if "error" in data:
        raise RuntimeError(data["error"])
    return data["result"]

# 1) login → uid
uid = rpc("common", "login", DB, USER, PWD)
if not isinstance(uid, int):
    print("Login failed (invalid uid).", file=sys.stderr)
    sys.exit(1)

def call(model, method, *args, **kwargs):
    full_args = [DB, uid, PWD, model, method] + list(args)
    if kwargs:
        full_args.append(kwargs)
    return rpc("object", "execute_kw", *full_args)

# 2) Fetch all courses
fields = ["id", "name", "category", "website_published", "sequence", "description"]
channels = call("slide.channel", "search_read", [[]],
                {"fields": fields, "order": "sequence asc"})

# 3) Enrich with counters (slides / sections / contents)
result_rows = []
for ch in channels:
    channel_id = ch["id"]
    total_slides = call("slide.slide", "search_count",
                        [[["channel_id", "=", channel_id]]])
    sections_cnt = call("slide.slide", "search_count",
                        [[["channel_id", "=", channel_id], ["is_category", "=", True]]])
    contents_cnt = call("slide.slide", "search_count",
                        [[["channel_id", "=", channel_id], ["is_category", "=", False]]])
    row = {
        "id": channel_id,
        "name": ch.get("name"),
        "category": ch.get("category"),
        "website_published": ch.get("website_published"),
        "sequence": ch.get("sequence"),
        "total_slides": total_slides,
        "sections": sections_cnt,
        "contents": contents_cnt,
        "description": (ch.get("description") or "").strip()[:200],  # preview
    }
    result_rows.append(row)

# 4) Console output (simple table)
def fmt_bool(b):
    return "✓" if b else "—"

print("\n== Odoo eLearning courses ==")
print(f"{'ID':<5} {'Name':<40} {'Pub?':<5} {'Slides':<6} {'Sections':<8} {'Contents':<8} {'Category':<12}")
for r in result_rows:
    print(f"{r['id']:<5} {r['name'][:38]:<40} {fmt_bool(r['website_published']):<5} "
          f"{r['total_slides']:<6} {r['sections']:<8} {r['contents']:<8} {str(r['category'])[:10]:<12}")

# 5) CSV export (optional)
if WRITE_CSV:
    cols = ["id", "name", "category", "website_published", "sequence",
            "total_slides", "sections", "contents", "description"]
    with open(CSV_PATH, "w", newline="", encoding="utf-8") as f:
        w = csv.DictWriter(f, fieldnames=cols)
        w.writeheader()
        w.writerows(result_rows)
    print(f"\nCSV written → {CSV_PATH}")

# 6) JSON (if useful)
# print(json.dumps(result_rows, ensure_ascii=False, indent=2))
```
Tips
- To filter (e.g. only published courses): use the domain [('website_published','=',True)] instead of [].
- Add other slide.channel fields as needed (user_id, enroll, tag_ids, etc.).
- If you also want each course's public website URL, you can read website_url (depending on version/configuration), or build it from the channel slug if available.
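If you do build the URL yourself, the sketch below shows one way to derive a slug from the course name. The `/slides/<slug>-<id>` pattern and the `slugify` helper are assumptions to verify against your Odoo version; reading website_url from the record remains the safer option.

```python
import re
import unicodedata

def slugify(name: str) -> str:
    """Rough ASCII slug: strip accents, lowercase, hyphenate (hypothetical helper)."""
    ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", "-", ascii_name.lower()).strip("-")

def build_course_url(base: str, name: str, channel_id: int) -> str:
    # Assumed URL pattern; check a real course URL on your instance first.
    return f"{base}/slides/{slugify(name)}-{channel_id}"

print(build_course_url("https://votre-domaine.odoo.com", "Initiation Solaire", 42))
```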
Variant that also exports the full hierarchy (Course → Sections → Contents) into a single JSON/CSV
Here is a turnkey Python script that exports the full Odoo eLearning hierarchy (Course → Sections → Contents) as:
- a JSON file structured per course (hierarchy.json)
- a single flat CSV with one record per section and per content item (hierarchy.csv), handy for Excel/analysis.
It uses the JSON-RPC API (/jsonrpc), preserves display order via sequence, and reads all records in batches (pagination).
Python script (requests)
```python
import requests, json, csv, sys, time
from typing import List, Dict, Any

# ====== PARAMETERS TO ADAPT ======
BASE = "https://votre-domaine.odoo.com/jsonrpc"
DB = "VOTRE_DB"
USER = "user@example.com"
PWD = "VOTRE_MDP"

# Output files
JSON_PATH = "hierarchy.json"
CSV_PATH = "hierarchy.csv"

# Batch size for read() (avoids default limits)
BATCH_SIZE = 200
# ==================================

def rpc(service: str, method: str, *args):
    payload = {"jsonrpc": "2.0", "method": "call", "id": 1,
               "params": {"service": service, "method": method, "args": list(args)}}
    r = requests.post(BASE, json=payload, timeout=90)
    r.raise_for_status()
    data = r.json()
    if "error" in data:
        raise RuntimeError(data["error"])
    return data["result"]

def login(db: str, user: str, pwd: str) -> int:
    uid = rpc("common", "login", db, user, pwd)
    if not isinstance(uid, int):
        raise RuntimeError("Login failed (invalid uid).")
    return uid

def call(uid: int, model: str, method: str, *args, **kwargs):
    full_args = [DB, uid, PWD, model, method] + list(args)
    if kwargs:
        full_args.append(kwargs)
    return rpc("object", "execute_kw", *full_args)

def search_all_ids(uid: int, model: str, domain: list, order: str = None) -> List[int]:
    """Return all IDs (search), honoring an optional order."""
    # execute_kw expects positional args as a list, hence the [domain] wrapping
    ids = call(uid, model, "search", [domain], {"order": order} if order else {})
    return ids

def read_in_batches(uid: int, model: str, ids: List[int], fields: List[str]) -> List[Dict[str, Any]]:
    """read() in batches to avoid limits and oversized responses."""
    results = []
    for i in range(0, len(ids), BATCH_SIZE):
        chunk = ids[i:i + BATCH_SIZE]
        records = call(uid, model, "read", [chunk, fields])
        results.extend(records)
        # small courtesy pause when there is a lot of data
        if len(ids) > 2000:
            time.sleep(0.05)
    return results

def get_channels(uid: int) -> List[Dict[str, Any]]:
    ch_fields = ["id", "name", "category", "website_published", "sequence", "description", "website_url"]
    ch_ids = search_all_ids(uid, "slide.channel", [], order="sequence asc")
    return read_in_batches(uid, "slide.channel", ch_ids, ch_fields)

def get_sections(uid: int, channel_id: int) -> List[Dict[str, Any]]:
    sec_domain = [["channel_id", "=", channel_id], ["is_category", "=", True]]
    sec_fields = ["id", "name", "sequence", "is_category", "channel_id"]
    sec_ids = search_all_ids(uid, "slide.slide", sec_domain, order="sequence asc")
    return read_in_batches(uid, "slide.slide", sec_ids, sec_fields)

def get_contents_for_section(uid: int, channel_id: int, section_id: int) -> List[Dict[str, Any]]:
    domain = [["channel_id", "=", channel_id], ["is_category", "=", False], ["category_id", "=", section_id]]
    return get_contents(uid, domain)

def get_root_contents(uid: int, channel_id: int) -> List[Dict[str, Any]]:
    domain = [["channel_id", "=", channel_id], ["is_category", "=", False], ["category_id", "=", False]]
    return get_contents(uid, domain)

def get_contents(uid: int, domain: list) -> List[Dict[str, Any]]:
    fields = [
        "id", "name", "sequence", "is_category", "category_id", "channel_id",
        "slide_type", "website_published", "url", "website_url", "mime_type", "completion_time",
    ]
    ids = search_all_ids(uid, "slide.slide", domain, order="sequence asc")
    return read_in_batches(uid, "slide.slide", ids, fields)

def build_hierarchy(uid: int) -> Dict[str, Any]:
    hierarchy = {"channels": []}
    channels = get_channels(uid)
    for ch in channels:
        ch_entry = {
            "id": ch["id"],
            "name": ch.get("name"),
            "category": ch.get("category"),
            "website_published": ch.get("website_published"),
            "sequence": ch.get("sequence"),
            "description": (ch.get("description") or "").strip(),
            "website_url": ch.get("website_url"),
            "root_contents": [],
            "sections": [],
        }
        # Sections (chapters)
        sections = get_sections(uid, ch["id"])
        # Contents per section
        for sec in sections:
            items = get_contents_for_section(uid, ch["id"], sec["id"])
            ch_entry["sections"].append({
                "id": sec["id"],
                "name": sec.get("name"),
                "sequence": sec.get("sequence"),
                "items": items,
            })
        # Root-level contents (no section)
        ch_entry["root_contents"] = get_root_contents(uid, ch["id"])
        hierarchy["channels"].append(ch_entry)
    return hierarchy

def write_json(hierarchy: Dict[str, Any], path: str):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(hierarchy, f, ensure_ascii=False, indent=2)

def write_csv(hierarchy: Dict[str, Any], path: str):
    """
    A flat CSV with:
    - record_type: 'section' or 'content'
    - channel_*: course info
    - section_*: section info (when applicable)
    - slide_*: content info (when applicable)
    """
    cols = [
        "record_type",
        "channel_id", "channel_name", "channel_category", "channel_published", "channel_sequence", "channel_url",
        "section_id", "section_name", "section_sequence",
        "slide_id", "slide_name", "slide_type", "slide_sequence", "slide_published", "slide_url", "slide_mime", "slide_completion_time",
    ]
    with open(path, "w", newline="", encoding="utf-8") as f:
        w = csv.DictWriter(f, fieldnames=cols)
        w.writeheader()
        for ch in hierarchy["channels"]:
            base = {
                "channel_id": ch["id"],
                "channel_name": ch.get("name"),
                "channel_category": ch.get("category"),
                "channel_published": ch.get("website_published"),
                "channel_sequence": ch.get("sequence"),
                "channel_url": ch.get("website_url"),
            }
            # 1) "section" rows
            for sec in ch["sections"]:
                row = {
                    **base,
                    "record_type": "section",
                    "section_id": sec["id"],
                    "section_name": sec.get("name"),
                    "section_sequence": sec.get("sequence"),
                }
                w.writerow(row)
                # 2) "content" rows for each section
                for it in sec["items"]:
                    rowc = {
                        **base,
                        "record_type": "content",
                        "section_id": sec["id"],
                        "section_name": sec.get("name"),
                        "section_sequence": sec.get("sequence"),
                        "slide_id": it["id"],
                        "slide_name": it.get("name"),
                        "slide_type": it.get("slide_type"),
                        "slide_sequence": it.get("sequence"),
                        "slide_published": it.get("website_published"),
                        "slide_url": it.get("website_url") or it.get("url"),
                        "slide_mime": it.get("mime_type"),
                        "slide_completion_time": it.get("completion_time"),
                    }
                    w.writerow(rowc)
            # 3) Root-level "content" rows (no section)
            for it in ch["root_contents"]:
                rowr = {
                    **base,
                    "record_type": "content",
                    "slide_id": it["id"],
                    "slide_name": it.get("name"),
                    "slide_type": it.get("slide_type"),
                    "slide_sequence": it.get("sequence"),
                    "slide_published": it.get("website_published"),
                    "slide_url": it.get("website_url") or it.get("url"),
                    "slide_mime": it.get("mime_type"),
                    "slide_completion_time": it.get("completion_time"),
                }
                w.writerow(rowr)

def main():
    try:
        uid = login(DB, USER, PWD)
        hierarchy = build_hierarchy(uid)
        write_json(hierarchy, JSON_PATH)
        write_csv(hierarchy, CSV_PATH)
        print(f"OK ✅ Export finished.\n- JSON: {JSON_PATH}\n- CSV: {CSV_PATH}")
    except Exception as e:
        print(f"Error ❌: {e}", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    main()
```
What the export produces
- hierarchy.json (excerpt):
```json
{
  "channels": [
    {
      "id": 42,
      "name": "Initiation Solaire",
      "website_url": "/courses/initiation-solaire",
      "root_contents": [
        { "id": 501, "name": "Introduction", ... }
      ],
      "sections": [
        {
          "id": 314,
          "name": "Sécurité & normes",
          "sequence": 10,
          "items": [
            { "id": 701, "name": "EPI obligatoires", "slide_type": "pdf", ... },
            { "id": 702, "name": "Normes DC/AC", "slide_type": "video", ... }
          ]
        }
      ]
    }
  ]
}
```
- hierarchy.csv: a flat table
```csv
record_type,channel_id,channel_name,channel_category,channel_published,channel_sequence,channel_url,section_id,section_name,section_sequence,slide_id,slide_name,slide_type,slide_sequence,slide_published,slide_url,slide_mime,slide_completion_time
section,42,Initiation Solaire,,True,1,/courses/initiation-solaire,314,Sécurité & normes,10,,,,,,,,
content,42,Initiation Solaire,,True,1,/courses/initiation-solaire,314,Sécurité & normes,10,701,EPI obligatoires,pdf,1,True,/slides/701,application/pdf,8
content,42,Initiation Solaire,,True,1,/courses/initiation-solaire,,,,501,Introduction,document,0,True,/slides/501,text/html,5
```
Useful options (enable as needed)
- Filter courses (e.g. published only): replace the [] domain with [('website_published','=',True)] in get_channels.
- Extra fields: add them in get_contents (e.g. quiz_nb_attempts, question_count, etc.) or in get_channels (tag_ids, enroll, user_id…).
- Performance: raise BATCH_SIZE if your instance is fast; lower it if you hit timeouts.
- Ordering: the final order follows sequence asc for courses → sections → contents.
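As a small illustration of the first option, the domain swap can be made conditional; `channel_domain` below is a hypothetical helper, not part of the script above:

```python
def channel_domain(published_only: bool = False) -> list:
    # [] matches every record; the extra leaf restricts to published courses
    return [["website_published", "=", True]] if published_only else []

# Example: search_all_ids(uid, "slide.channel", channel_domain(True), order="sequence asc")
print(channel_domain(True))
```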
Node.js variant
Here is a ready-to-use Node.js (CLI) variant that exports the full hierarchy (Course → Sections → Contents) from Odoo (website_slides):
- Produces hierarchy.json (structured)
- Produces hierarchy.csv (flat table, usable in Excel)
1) Quick setup
```bash
mkdir odoo-elearning-export && cd $_
npm init -y
npm i axios commander
```
Create the odoo-elearning-export.mjs file below.
2) CLI script (Node.js / ESM)
```js
#!/usr/bin/env node
import fs from "node:fs";
import { Command } from "commander";
import axios from "axios";

const program = new Command();
program
  .name("odoo-elearning-export")
  .description("Export Odoo eLearning hierarchy (slide.channel → slide.slide)")
  .requiredOption("--base <url>", "Base URL (e.g. https://votre-domaine.odoo.com/jsonrpc)")
  .requiredOption("--db <name>", "Odoo database")
  .requiredOption("--user <email>", "Odoo login")
  .requiredOption("--pwd <password>", "Odoo password")
  .option("--json <path>", "JSON output path", "hierarchy.json")
  .option("--csv <path>", "CSV output path", "hierarchy.csv")
  .option("--published-only", "Only export published channels", false)
  .option("--batch <n>", "Batch size for read()", "200")
  .parse(process.argv);

const OPT = program.opts();
const BASE = OPT.base;
const DB = OPT.db;
const USER = OPT.user;
const PWD = OPT.pwd;
const JSON_PATH = OPT.json;
const CSV_PATH = OPT.csv;
const ONLY_PUBLISHED = !!OPT.publishedOnly;
const BATCH_SIZE = Math.max(1, parseInt(OPT.batch, 10) || 200);

async function rpc(service, method, ...args) {
  const payload = { jsonrpc: "2.0", method: "call", id: Date.now(),
                    params: { service, method, args } };
  const { data } = await axios.post(BASE, payload, { timeout: 120000 });
  if (data?.error) throw new Error(JSON.stringify(data.error));
  return data.result;
}

async function login() {
  const uid = await rpc("common", "login", DB, USER, PWD);
  if (typeof uid !== "number") throw new Error("Login failed: uid invalid");
  return uid;
}

async function call(uid, model, method, ...args) {
  return rpc("object", "execute_kw", DB, uid, PWD, model, method, ...args);
}

async function searchAllIds(uid, model, domain, order) {
  const kwargs = {};
  if (order) kwargs.order = order;
  return call(uid, model, "search", [domain], kwargs);
}

async function readInBatches(uid, model, ids, fields) {
  const out = [];
  for (let i = 0; i < ids.length; i += BATCH_SIZE) {
    const chunk = ids.slice(i, i + BATCH_SIZE);
    const recs = await call(uid, model, "read", [chunk, fields]);
    out.push(...recs);
    // slight pause for very large volumes
    if (ids.length > 2000) await new Promise(r => setTimeout(r, 50));
  }
  return out;
}

async function getChannels(uid) {
  const fields = ["id","name","category","website_published","sequence","description","website_url"];
  const domain = ONLY_PUBLISHED ? [["website_published","=",true]] : [];
  const ids = await searchAllIds(uid, "slide.channel", domain, "sequence asc");
  return readInBatches(uid, "slide.channel", ids, fields);
}

async function getSections(uid, channelId) {
  const fields = ["id","name","sequence","is_category","channel_id"];
  const domain = [["channel_id","=",channelId],["is_category","=",true]];
  const ids = await searchAllIds(uid, "slide.slide", domain, "sequence asc");
  return readInBatches(uid, "slide.slide", ids, fields);
}

async function getContents(uid, domain) {
  const fields = [
    "id","name","sequence","is_category","category_id","channel_id",
    "slide_type","website_published","url","website_url","mime_type","completion_time"
  ];
  const ids = await searchAllIds(uid, "slide.slide", domain, "sequence asc");
  return readInBatches(uid, "slide.slide", ids, fields);
}

async function getContentsForSection(uid, channelId, sectionId) {
  return getContents(uid, [["channel_id","=",channelId],["is_category","=",false],["category_id","=",sectionId]]);
}

async function getRootContents(uid, channelId) {
  return getContents(uid, [["channel_id","=",channelId],["is_category","=",false],["category_id","=",false]]);
}

function csvEscape(v) {
  if (v === null || v === undefined) return "";
  const s = String(v);
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

function writeCSV(hierarchy, path) {
  const cols = [
    "record_type",
    "channel_id","channel_name","channel_category","channel_published","channel_sequence","channel_url",
    "section_id","section_name","section_sequence",
    "slide_id","slide_name","slide_type","slide_sequence","slide_published","slide_url","slide_mime","slide_completion_time"
  ];
  const lines = [];
  lines.push(cols.join(","));
  for (const ch of hierarchy.channels) {
    const base = {
      channel_id: ch.id,
      channel_name: ch.name ?? "",
      channel_category: ch.category ?? "",
      channel_published: ch.website_published ?? "",
      channel_sequence: ch.sequence ?? "",
      channel_url: ch.website_url ?? "",
    };
    // sections
    for (const sec of ch.sections) {
      const row = {
        record_type: "section",
        ...base,
        section_id: sec.id,
        section_name: sec.name ?? "",
        section_sequence: sec.sequence ?? "",
      };
      lines.push(cols.map(k => csvEscape(row[k] ?? "")).join(","));
      // contents in section
      for (const it of sec.items) {
        const rowc = {
          record_type: "content",
          ...base,
          section_id: sec.id,
          section_name: sec.name ?? "",
          section_sequence: sec.sequence ?? "",
          slide_id: it.id,
          slide_name: it.name ?? "",
          slide_type: it.slide_type ?? "",
          slide_sequence: it.sequence ?? "",
          slide_published: it.website_published ?? "",
          slide_url: it.website_url ?? it.url ?? "",
          slide_mime: it.mime_type ?? "",
          slide_completion_time: it.completion_time ?? "",
        };
        lines.push(cols.map(k => csvEscape(rowc[k] ?? "")).join(","));
      }
    }
    // root contents
    for (const it of ch.root_contents) {
      const rowr = {
        record_type: "content",
        ...base,
        slide_id: it.id,
        slide_name: it.name ?? "",
        slide_type: it.slide_type ?? "",
        slide_sequence: it.sequence ?? "",
        slide_published: it.website_published ?? "",
        slide_url: it.website_url ?? it.url ?? "",
        slide_mime: it.mime_type ?? "",
        slide_completion_time: it.completion_time ?? "",
      };
      lines.push(cols.map(k => csvEscape(rowr[k] ?? "")).join(","));
    }
  }
  fs.writeFileSync(path, lines.join("\n"), "utf-8");
}

async function main() {
  try {
    const uid = await login();
    const channels = await getChannels(uid);
    const hierarchy = { channels: [] };
    for (const ch of channels) {
      const entry = {
        id: ch.id,
        name: ch.name ?? "",
        category: ch.category ?? "",
        website_published: ch.website_published ?? false,
        sequence: ch.sequence ?? 0,
        description: (ch.description || "").trim(),
        website_url: ch.website_url ?? "",
        root_contents: [],
        sections: []
      };
      const sections = await getSections(uid, ch.id);
      for (const sec of sections) {
        const items = await getContentsForSection(uid, ch.id, sec.id);
        entry.sections.push({ id: sec.id, name: sec.name ?? "", sequence: sec.sequence ?? 0, items });
      }
      entry.root_contents = await getRootContents(uid, ch.id);
      hierarchy.channels.push(entry);
    }
    fs.writeFileSync(JSON_PATH, JSON.stringify(hierarchy, null, 2), "utf-8");
    writeCSV(hierarchy, CSV_PATH);
    console.log("OK ✅ Export finished.");
    console.log(`- JSON: ${JSON_PATH}`);
    console.log(`- CSV: ${CSV_PATH}`);
  } catch (err) {
    console.error("Error ❌", err?.message || err);
    process.exit(1);
  }
}

main();
```
Make the script executable (optional):
```bash
chmod +x odoo-elearning-export.mjs
```
3) Usage examples
- Full export (all courses):
```bash
node odoo-elearning-export.mjs \
  --base https://votre-domaine.odoo.com/jsonrpc \
  --db VOTRE_DB \
  --user user@example.com \
  --pwd "VOTRE_MDP"
```
- Published courses only + read batches of 500 + custom output paths:
```bash
node odoo-elearning-export.mjs \
  --base https://votre-domaine.odoo.com/jsonrpc \
  --db VOTRE_DB \
  --user user@example.com \
  --pwd "VOTRE_MDP" \
  --published-only \
  --batch 500 \
  --json cours_hierarchy.json \
  --csv cours_hierarchy.csv
```
4) Notes and options
- The script uses the standard JSON-RPC API: /jsonrpc → common.login, then object.execute_kw.
- Hierarchy based on slide.slide, with:
  - is_category = True → section (chapter)
  - is_category = False + category_id = <section_id> → content within that section
  - is_category = False + category_id = False → root-level content of the course
- Ordering is honored via sequence asc at every level.
- Extra fields are easy to add in getContents / getChannels (e.g. quiz fields).
- For a proxy or internal certificates, you can configure Axios (HTTPS agent).
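For reference, the three hierarchy rules above can be written as small pure domain builders (shown in Python for brevity; the Node.js equivalents are analogous). The function names are illustrative, not part of either script:

```python
def section_domain(channel_id: int) -> list:
    # is_category = True -> section (chapter)
    return [["channel_id", "=", channel_id], ["is_category", "=", True]]

def section_content_domain(channel_id: int, section_id: int) -> list:
    # is_category = False + category_id = <section id> -> content of that section
    return [["channel_id", "=", channel_id], ["is_category", "=", False],
            ["category_id", "=", section_id]]

def root_content_domain(channel_id: int) -> list:
    # is_category = False + category_id = False -> root-level content of the course
    return [["channel_id", "=", channel_id], ["is_category", "=", False],
            ["category_id", "=", False]]

print(root_content_domain(42))
```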