Modify skill directory

parent 2911c67771 · commit e217fe3403
@@ -1,218 +0,0 @@ skills/auto-daily-summary/SKILL.md

---
name: auto-daily-summary
description: Generate recurring summaries, daily reports, content digests, and concise action-oriented briefs from multiple inputs. Use when the user asks for daily summaries, periodic briefings, meeting digests, content condensation, or automated recurring report generation. Chinese trigger terms include: 日报、周报、摘要、会议纪要、内容浓缩、自动汇总、每天发我一份总结。
---
# Auto Daily Summary

## Overview

This skill converts scattered information into concise, structured summaries for recurring or one-off use.

Typical scenarios:

- daily or weekly report generation
- long content condensation
- meeting or conversation summaries
- multi-source digests
- action-item extraction

This skill focuses on **organization and summarization**, not source retrieval itself.

## Quick Start

When the user asks for a summary or report:

1. Identify the sources to summarize
2. Clarify the audience and desired level of detail
3. Determine whether the output is one-time or recurring
4. Summarize by theme, not as a raw chronological dump, unless requested
5. Extract action items and watch items when useful

### Chinese Task Mapping

- “帮我整理成日报” (turn this into a daily report) → `daily_report`
- “做个周报/周总结” (make a weekly report or summary) → `digest` or `daily_report`
- “把这段会议内容整理一下” (organize this meeting content) → `meeting_digest`
- “浓缩成 3-5 条重点” (condense into 3-5 key points) → `digest` + `short`
- “每天早上发我一份总结” (send me a summary every morning) → `plan-recurring` + `schedule-job`
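
The mapping above can be sketched as a small keyword router. The `route` helper and its keyword table below are illustrative assumptions, not part of the shipped scripts; the style names mirror the mapping above and `VALID_STYLES` in `summary_core.py`.

```python
# Illustrative sketch: route a Chinese request phrase to a summary style.
# The router and keyword table are hypothetical, not part of summary_cli.py.
STYLE_KEYWORDS = [
    ("日报", "daily_report"),
    ("周报", "digest"),
    ("周总结", "digest"),
    ("会议", "meeting_digest"),
    ("浓缩", "digest"),
]

def route(request: str, default: str = "digest") -> str:
    """Return the first style whose keyword appears in the request."""
    for keyword, style in STYLE_KEYWORDS:
        if keyword in request:
            return style
    return default
```

First match wins, so more specific keywords should be listed before generic ones.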

## Input Requirements

| Field | Required | Description |
|-------|----------|-------------|
| source content | yes | Text, notes, messages, links, reports, logs, or mixed content |
| summary objective | yes | Inform, decide, archive, hand off, or monitor |
| audience | no | Self, team, manager, executive, customer |
| time scope | no | Today, this week, meeting duration, selected period |
| desired length | no | TL;DR, short, standard, detailed |
| output style | no | Daily report, digest, executive summary, bullet list |
| action extraction | no | Whether to extract todos, risks, blockers |
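
These fields correspond to the `data` object consumed by `summary_cli.py --input-json`. A minimal payload might look like the following; the field names come from `validate_payload` in `summary_core.py`, and `missing_fields` is a simplified sketch of that function's required-field checks, not the shipped validator.

```python
import json

# Minimal payload: "data.sources" and "data.objective" are required;
# style/length/extract_* are optional (see validate_payload in summary_core.py).
payload = {
    "language": "zh",
    "data": {
        "sources": [{"content": "上午完成部署;TODO: 跟进监控告警"}],
        "objective": "inform",
        "style": "daily_report",
        "length": "short",
        "extract_actions": True,
    },
}

def missing_fields(payload: dict) -> list[str]:
    """Simplified sketch of the required-field part of validate_payload."""
    data = payload.get("data") or {}
    problems = []
    if not data.get("sources"):
        problems.append("data.sources must be a non-empty array")
    if not str(data.get("objective", "")).strip():
        problems.append("data.objective is required")
    return problems

# Serialized form suitable for the --input-json argument.
arg = json.dumps(payload, ensure_ascii=False)
```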

## Workflow Decision Tree

### Content Summary
Use when the user wants a concise summary of a long input.

### Daily / Weekly Report
Use when the user wants a periodic report with sections and status updates.

### Meeting Digest
Use when the user wants decisions, action items, and blockers from a discussion.

### Recurring Summary Workflow
Use when the user wants summaries on a schedule. In that case, pair with `schedule-job`.

## Instructions

### Step 1: Identify source boundaries
Clarify what should and should not be included in the summary.

### Step 2: Determine the correct abstraction level
Choose the right level for the audience:
- executive audience -> implications and decisions
- working team -> concrete tasks and blockers
- archive -> structured factual recap

### Step 3: Group by theme
Prefer grouping by:
- progress
- decisions
- blockers
- risks
- next steps

Avoid copying source order unless chronology itself matters.
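
A crude version of this grouping can be sketched with keyword buckets. The bucket names follow the list above; the keywords themselves are illustrative assumptions for this sketch, not anything from the shipped scripts.

```python
# Illustrative keyword buckets for theme-first grouping. Both the bucket
# order and the keyword lists are assumptions for this sketch.
THEMES = {
    "progress": ["finished", "completed", "shipped", "done"],
    "decisions": ["decided", "agreed", "approved"],
    "blockers": ["blocked", "waiting on", "stuck"],
    "risks": ["risk", "might fail", "concern"],
    "next steps": ["next", "plan to", "will"],
}

def group_by_theme(notes: list[str]) -> dict[str, list[str]]:
    """Assign each note to the first theme whose keyword it mentions."""
    grouped: dict[str, list[str]] = {theme: [] for theme in THEMES}
    grouped["other"] = []
    for note in notes:
        lowered = note.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in lowered for keyword in keywords):
                grouped[theme].append(note)
                break
        else:
            grouped["other"].append(note)
    return grouped
```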

### Step 4: Extract action items
When appropriate, identify:
- owner
- task
- due timing
- dependency or blocker

If ownership is unclear, say so.

### Step 5: Prepare for automation if needed
If the user wants recurring output:
- use `schedule-job` for cadence
- use `imap-smtp-email` or other enabled notification skills for delivery

## Scripts

### CLI Usage

Use the following commands when you need stable structured outputs:

```bash
poetry run python skills/auto-daily-summary/scripts/summary_cli.py validate --input-json '<JSON>'
poetry run python skills/auto-daily-summary/scripts/summary_cli.py run --input-json '<JSON>' --output json
poetry run python skills/auto-daily-summary/scripts/summary_cli.py plan-recurring --input-json '<JSON>'
```

### Recommended Uses

- `validate` - check whether the summary request payload is complete
- `run` - generate summary JSON and markdown
- `plan-recurring` - generate a schedule-ready message payload for `schedule-job`

## Output Templates

### Daily Report

```markdown
# Daily Report

## Summary
[Short summary]

## Key Updates
- [Update]

## Decisions
- [Decision]

## Risks / Blockers
- [Risk or blocker]

## Next Actions
- [Action]
```

### Content Digest

```markdown
# Content Digest

## TL;DR
[Very short summary]

## Main Themes
### 1. [Theme]
- [Key point]

## Notable Details
- [Detail]

## Follow-up
- [Suggested follow-up]
```

## Quality Checklist

Before finalizing, verify:

- the summary matches the audience level
- repetition and noise are removed
- key decisions are not buried
- action items are explicit when relevant
- uncertainty is preserved rather than flattened away
- the result is shorter and clearer than the source material

## Fallback Strategy

If the input is too fragmented:

- produce a partial summary by theme
- list gaps or unclear areas
- ask for additional source material only if needed for the user's stated goal

## Related Skills

- `skills/schedule-job/SKILL.md` - automate recurring execution
- `skills/imap-smtp-email/SKILL.md` - send summaries via email
- `skills/market-academic-insight/SKILL.md` - use when the task is deeper research synthesis rather than pure summarization
- `skills/competitor-news-intel/SKILL.md` - use when competitor monitoring and intelligence is the real task

## Examples

**User**: "帮我把今天的工作内容整理成日报" (turn today's work into a daily report)

Expected output:
- summary
- key updates
- blockers
- next actions

**User**: "把这篇长文浓缩成 5 条重点" (condense this long article into 5 key points)

Expected output:
- TL;DR
- 5 concise points
- optional follow-up note

**User**: "每天早上自动给我发新闻摘要" (automatically send me a news digest every morning)

Expected output:
- summary format definition
- recommendation to combine with `schedule-job`
- delivery method confirmation

**User**: "把这段会议记录整理成会议纪要" (turn these meeting notes into minutes)

Expected output:
- summary
- decisions
- action items
- blockers if any

**User**: "给我做个今天的三段式总结" (give me a three-part summary of today)

Expected output:
- summary
- key updates
- next actions
@@ -1,164 +0,0 @@ skills/auto-daily-summary/scripts/summary_cli.py

```python
#!/usr/bin/env python3
import argparse
import json
import sys
from datetime import datetime, UTC

from summary_core import build_summary, validate_payload


ERROR_TEMPLATE = {
    "success": False,
    "code": "invalid_input",
    "message": "",
    "data": {},
    "meta": {},
    "errors": [],
}


def _now_iso() -> str:
    return datetime.now(UTC).isoformat()


def _emit_json(data: dict, pretty: bool, stream=None):
    print(json.dumps(data, ensure_ascii=False, indent=2 if pretty else None), file=stream or sys.stdout)


def _error_response(code: str, message: str, errors: list[str] | None = None) -> dict:
    return {
        **ERROR_TEMPLATE,
        "code": code,
        "message": message,
        "meta": {"generated_at": _now_iso()},
        "errors": errors or [],
    }


def _parse_bool(value: str | None) -> bool | None:
    if value is None:
        return None
    lowered = value.lower()
    if lowered in {"1", "true", "yes", "y"}:
        return True
    if lowered in {"0", "false", "no", "n"}:
        return False
    raise ValueError(f"invalid boolean value: {value}")
```

```python
def _load_payload(raw: str) -> dict:
    return json.loads(raw)


def _apply_overrides(payload: dict, args: argparse.Namespace) -> dict:
    payload.setdefault("data", {})
    if args.lang:
        payload["language"] = args.lang
    if args.style:
        payload["data"]["style"] = args.style
    if args.length:
        payload["data"]["length"] = args.length
    if hasattr(args, "extract_actions"):
        extract_actions = _parse_bool(getattr(args, "extract_actions", None))
        if extract_actions is not None:
            payload["data"]["extract_actions"] = extract_actions
    if hasattr(args, "extract_risks"):
        extract_risks = _parse_bool(getattr(args, "extract_risks", None))
        if extract_risks is not None:
            payload["data"]["extract_risks"] = extract_risks
    return payload


def cmd_validate(args: argparse.Namespace):
    payload = _apply_overrides(_load_payload(args.input_json), args)
    errors = validate_payload(payload)
    result = {
        "success": not errors,
        "code": "ok" if not errors else "invalid_input",
        "message": "payload valid" if not errors else "payload invalid",
        "data": {"valid": not errors},
        "meta": {"generated_at": _now_iso()},
        "errors": errors,
    }
    target_stream = sys.stdout if not errors else sys.stderr
    _emit_json(result, args.pretty, target_stream)
    if errors:
        raise SystemExit(1)
```

```python
def cmd_run(args: argparse.Namespace):
    payload = _apply_overrides(_load_payload(args.input_json), args)
    errors = validate_payload(payload)
    if errors:
        _emit_json(_error_response("invalid_input", "payload invalid", errors), args.pretty, sys.stderr)
        raise SystemExit(1)
    data = build_summary(payload)
    if args.output == "markdown":
        print(data["markdown"])
        return
    result = {
        "success": True,
        "code": "ok",
        "message": "summary generated",
        "data": data,
        "meta": {
            "generated_at": _now_iso(),
            "source_count": len(payload.get("data", {}).get("sources", [])),
        },
        "errors": [],
    }
    _emit_json(result, args.pretty)


def cmd_plan_recurring(args: argparse.Namespace):
    payload = _apply_overrides(_load_payload(args.input_json), args)
    errors = validate_payload(payload)
    if errors:
        _emit_json(_error_response("invalid_input", "payload invalid", errors), args.pretty, sys.stderr)
        raise SystemExit(1)
    data = build_summary(payload)
    result = {
        "success": True,
        "code": "ok",
        "message": "recurring plan generated",
        "data": {"schedule_payload": data["schedule_payload"]},
        "meta": {"generated_at": _now_iso()},
        "errors": [],
    }
    _emit_json(result, args.pretty)


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Generate structured summaries")
    subparsers = parser.add_subparsers(dest="command", required=True)
    for name in ["validate", "run", "plan-recurring"]:
        sub = subparsers.add_parser(name)
        sub.add_argument("--input-json", required=True)
        sub.add_argument("--lang")
        sub.add_argument("--style")
        sub.add_argument("--length")
        sub.add_argument("--extract-actions")
        sub.add_argument("--extract-risks")
        sub.add_argument("--pretty", action="store_true")
        if name == "run":
            sub.add_argument("--output", choices=["json", "markdown"], default="json")
    subparsers.choices["validate"].set_defaults(func=cmd_validate)
    subparsers.choices["run"].set_defaults(func=cmd_run)
    subparsers.choices["plan-recurring"].set_defaults(func=cmd_plan_recurring)
    return parser


if __name__ == "__main__":
    parser = build_parser()
    args = parser.parse_args()
    try:
        args.func(args)
    except json.JSONDecodeError as exc:
        _emit_json(_error_response("invalid_input", f"invalid json: {exc}", [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except ValueError as exc:
        _emit_json(_error_response("invalid_input", str(exc), [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except Exception as exc:
        _emit_json(_error_response("internal_error", "unexpected error", [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
```

@@ -1,228 +0,0 @@ skills/auto-daily-summary/scripts/summary_core.py

```python
import re
from collections import Counter
from typing import Any

SUMMARY_LENGTH_LIMITS = {
    "tldr": 2,
    "short": 4,
    "standard": 6,
    "detailed": 10,
}

ACTION_PREFIX_PATTERNS = [
    r"^(?:TODO|待办|action)[::]?\s*(.+)$",
    r"^(?:需要|跟进)\s*(.+)$",
]

RISK_KEYWORDS = {
    "high": ["阻塞", "blocker", "故障", "失败", "严重", "不可用"],
    "medium": ["风险", "延迟", "异常", "超时", "报错"],
    "low": ["提醒", "注意", "观察", "待确认"],
}
```

```python
BOOL_FIELDS = ["extract_actions", "extract_risks"]
VALID_STYLES = {"daily_report", "digest", "meeting_digest", "executive"}
VALID_LENGTHS = {"tldr", "short", "standard", "detailed"}


def _clean_text(text: str) -> str:
    return re.sub(r"\s+", " ", (text or "").strip())


def _normalize_key(text: str) -> str:
    text = _clean_text(text).lower()
    return re.sub(r"[^\w\u4e00-\u9fff]+", "", text)


def _split_sentences(text: str) -> list[str]:
    raw_parts = re.split(r"[。!?;.!?;]+|\n+", text)
    return [_clean_text(part) for part in raw_parts if _clean_text(part)]
```

```python
def _sentence_tokens(sentence: str) -> list[str]:
    return re.findall(r"[A-Za-z0-9_-]+|[\u4e00-\u9fff]{2,}", sentence.lower())


def _top_sentences(texts: list[str], limit: int) -> list[str]:
    sentences: list[str] = []
    for text in texts:
        sentences.extend(_split_sentences(text))

    if not sentences:
        return []

    token_counter = Counter()
    for sentence in sentences:
        token_counter.update(_sentence_tokens(sentence))

    scored: list[tuple[int, int, str]] = []
    for index, sentence in enumerate(sentences):
        score = sum(token_counter[token] for token in _sentence_tokens(sentence))
        scored.append((score, -index, sentence))

    ranked = [sentence for _, _, sentence in sorted(scored, reverse=True)]
    unique_ranked = []
    seen = set()
    for sentence in ranked:
        key = _normalize_key(sentence)
        if key and key not in seen:
            seen.add(key)
            unique_ranked.append(sentence)
        if len(unique_ranked) >= limit:
            break
    return unique_ranked
```
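
`_top_sentences` is a frequency-scoring extractive step: each sentence is scored by summing corpus-wide token counts, and the `-index` tuple element breaks score ties in favor of earlier sentences. A self-contained trace of that scoring (with `_sentence_tokens` copied from above):

```python
import re
from collections import Counter

# _sentence_tokens copied from summary_core.py.
def _sentence_tokens(sentence: str) -> list[str]:
    return re.findall(r"[A-Za-z0-9_-]+|[\u4e00-\u9fff]{2,}", sentence.lower())

sentences = [
    "deploy failed on server one",
    "deploy finished",
    "lunch was good",
]

# Corpus-wide token frequencies ("deploy" appears twice, so both deploy
# sentences are boosted).
token_counter = Counter()
for sentence in sentences:
    token_counter.update(_sentence_tokens(sentence))

# Score = sum of the frequencies of a sentence's tokens; -index breaks
# ties in favor of earlier sentences, mirroring _top_sentences.
scored = sorted(
    ((sum(token_counter[t] for t in _sentence_tokens(s)), -i, s)
     for i, s in enumerate(sentences)),
    reverse=True,
)
ranked = [s for _, _, s in scored]
```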

```python
def _trim_fragment(text: str, max_length: int = 80) -> str:
    fragment = re.split(r"[,,。;;!?]", text, maxsplit=1)[0]
    fragment = _clean_text(fragment)
    return fragment[:max_length].strip()


def _extract_actions(texts: list[str]) -> list[dict[str, Any]]:
    items: list[dict[str, Any]] = []
    for text in texts:
        for sentence in _split_sentences(text):
            for pattern in ACTION_PREFIX_PATTERNS:
                match = re.match(pattern, sentence, re.IGNORECASE)
                if not match:
                    continue
                task = _trim_fragment(match.group(1))
                if len(task) < 2:
                    continue
                items.append({"task": task, "owner": None, "due_at": None, "blocker": None})
                break
    dedup = []
    seen = set()
    for item in items:
        key = _normalize_key(item["task"])
        if key and key not in seen:
            seen.add(key)
            dedup.append(item)
    return dedup[:10]


def _extract_risks(texts: list[str]) -> list[dict[str, Any]]:
    risks: list[dict[str, Any]] = []
    for text in texts:
        for sentence in _split_sentences(text):
            lowered = sentence.lower()
            for impact, keywords in RISK_KEYWORDS.items():
                matched = next((keyword for keyword in keywords if keyword.lower() in lowered), None)
                if not matched:
                    continue
                start = max(0, lowered.find(matched.lower()) - 18)
                end = min(len(sentence), lowered.find(matched.lower()) + len(matched) + 30)
                fragment = _clean_text(sentence[start:end])
                fragment = fragment[:120]
                if len(fragment) < 2:
                    continue
                risks.append({"risk": fragment, "impact": impact, "mitigation": None})
                break
    dedup = []
    seen = set()
    for item in risks:
        key = _normalize_key(item["risk"])
        if key and key not in seen:
            seen.add(key)
            dedup.append(item)
    return dedup[:10]


def _build_summary_line(sentences: list[str]) -> str:
    if not sentences:
        return "暂无可提炼的关键信息。"
    selected = sentences[:2]
    return ";".join(selected)
```

```python
def build_summary(payload: dict[str, Any]) -> dict[str, Any]:
    data = payload.get("data", {})
    sources = data.get("sources", [])
    texts = [_clean_text(source.get("content", "")) for source in sources if _clean_text(source.get("content", ""))]
    length = data.get("length", "standard")
    style = data.get("style", "daily_report")
    limit = SUMMARY_LENGTH_LIMITS.get(length, SUMMARY_LENGTH_LIMITS["standard"])
    top_sentences = _top_sentences(texts, limit)

    summary_line = _build_summary_line(top_sentences)
    summary_keys = {_normalize_key(sentence) for sentence in top_sentences[:2]}
    detail_sentences = [sentence for sentence in top_sentences if _normalize_key(sentence) not in summary_keys]

    sections = []
    if detail_sentences:
        if len(detail_sentences) == 1:
            sections = [{"title": "Key Updates", "bullets": detail_sentences}]
        else:
            midpoint = max(1, len(detail_sentences) // 2)
            sections = [
                {"title": "Key Updates", "bullets": detail_sentences[:midpoint]},
                {"title": "Notable Details", "bullets": detail_sentences[midpoint:]},
            ]

    action_items = _extract_actions(texts) if data.get("extract_actions") else []
    risk_items = _extract_risks(texts) if data.get("extract_risks") else []

    markdown_lines = ["# Summary", "", "## Summary", f"- {summary_line}"]
    for section in sections:
        if not section["bullets"]:
            continue
        markdown_lines.extend(["", f"## {section['title']}"])
        markdown_lines.extend(f"- {bullet}" for bullet in section["bullets"])
    if action_items:
        markdown_lines.extend(["", "## Action Items"])
        markdown_lines.extend(f"- {item['task']}" for item in action_items)
    if risk_items:
        markdown_lines.extend(["", "## Risks"])
        markdown_lines.extend(f"- [{item['impact']}] {item['risk']}" for item in risk_items)

    schedule_payload = {
        "suggested_name": "Daily Summary",
        "message": "[Scheduled Task Triggered] 请立即汇总最新内容并输出结构化摘要,如有行动项和风险请一并列出,然后选择合适的通知方式发送给用户。",
    }

    return {
        "summary": summary_line,
        "sections": sections,
        "action_items": action_items,
        "risk_items": risk_items,
        "markdown": "\n".join(markdown_lines),
        "schedule_payload": schedule_payload,
        "style": style,
    }
```

```python
def _validate_source(source: Any, index: int) -> list[str]:
    errors = []
    if not isinstance(source, dict):
        return [f"data.sources[{index}] must be an object"]
    if not _clean_text(str(source.get("content", ""))):
        errors.append(f"data.sources[{index}].content is required")
    return errors


def validate_payload(payload: dict[str, Any]) -> list[str]:
    errors = []
    data = payload.get("data")
    if not isinstance(data, dict):
        return ["data must be an object"]
    sources = data.get("sources")
    if not isinstance(sources, list) or not sources:
        errors.append("data.sources must be a non-empty array")
    else:
        for index, source in enumerate(sources):
            errors.extend(_validate_source(source, index))
    objective = data.get("objective")
    if not isinstance(objective, str) or not objective.strip():
        errors.append("data.objective is required")
    style = data.get("style")
    if style is not None and style not in VALID_STYLES:
        errors.append(f"data.style must be one of {sorted(VALID_STYLES)}")
    length = data.get("length")
    if length is not None and length not in VALID_LENGTHS:
        errors.append(f"data.length must be one of {sorted(VALID_LENGTHS)}")
    for field in BOOL_FIELDS:
        value = data.get(field)
        if value is not None and not isinstance(value, bool):
            errors.append(f"data.{field} must be a boolean")
    return errors
```