add skill

This commit is contained in:
朱潮 2026-04-16 10:23:54 +08:00
parent 9825a43cad
commit d9d78075aa
12 changed files with 2179 additions and 1 deletions


@@ -1,12 +1,14 @@
# Skill Functionality
> Scope: skill package management service - core implementation
> Last updated: 2025-02-11
> Last updated: 2026-04-16
## Current Status
The Skill system supports two sources: official skills (`./skills/`) and user skills (`projects/uploads/{bot_id}/skills/`). It supports the Hook system and MCP server configuration, with metadata defined through SKILL.md or plugin.json.
A batch of pure-`SKILL.md` business skill MVPs has been added for research, summarization, reporting, and intelligence orchestration; underlying file handling and external retrieval continue to reuse existing skills.
## Core Files
- `routes/skill_manager.py` - Skill upload/delete/list API
@@ -18,10 +20,18 @@ The Skill system supports two sources: official skills (`./skills/`) and user sk
## Recent Important Items
- 2026-04-16: Added Python CLI script MVPs for `auto-daily-summary` and `competitor-news-intel`, both using the `argparse + JSON stdout` pattern
- 2026-04-16: Added 6 pure-`SKILL.md` business skills: `market-academic-insight`, `financial-report-generator`, `contract-document-generator`, `sales-decision-report`, `auto-daily-summary`, `competitor-news-intel`
- 2025-02-11: Initialized the skill functionality memory
## Gotchas (required reading before development)
- ⚠️ Pure-`SKILL.md` business skills should first carry the workflow, input templates, and output templates; add `scripts/` only once stable file output or automation is needed
- ⚠️ New business skills should reuse existing foundational skills (`baidu-search`, `xlsx`, `docx`, `pdf`, `schedule-job`, `imap-smtp-email`) rather than redefining low-level tool capabilities
- ⚠️ New scripts should prefer `Python + argparse + JSON stdout`, which suits automation pipelines better than `argv[1] JSON`
- ⚠️ `auto-daily-summary` needs particular care with Chinese sentence splitting, action-boundary truncation, and risk-window trimming; otherwise whole sentences or paragraphs get swallowed into one item
- ⚠️ Payload validation in `competitor-news-intel` should be split per command (collect/analyze/run); do not share one minimal validation
- ⚠️ `collect/run` in `competitor-news-intel` depend on `BAIDU_API_KEY`; when it is missing, return a stable error JSON rather than degrading silently
- ⚠️ Scripts must be executed with absolute paths
- ⚠️ MCP configuration priority: Skill MCP > default MCP > user parameters
- ⚠️ Upload size limit: 50MB; up to 500MB after ZIP extraction
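The `argparse + JSON stdout` convention recommended above can be sketched as follows. This is an illustrative skeleton only, not one of the shipped scripts; the subcommand and envelope fields are placeholders (a real script would end with `if __name__ == "__main__": raise SystemExit(main())`):

```python
import argparse
import json
import sys

def build_parser() -> argparse.ArgumentParser:
    # one subcommand per action, each taking --input-json
    parser = argparse.ArgumentParser(description="demo skill CLI")
    subparsers = parser.add_subparsers(dest="command", required=True)
    run = subparsers.add_parser("run")
    run.add_argument("--input-json", required=True)
    run.add_argument("--pretty", action="store_true")
    return parser

def main(argv=None) -> int:
    args = build_parser().parse_args(argv)
    try:
        payload = json.loads(args.input_json)
    except json.JSONDecodeError as exc:
        # stable error JSON on stderr, never a bare traceback
        print(json.dumps({"success": False, "code": "invalid_input",
                          "message": str(exc)}), file=sys.stderr)
        return 1
    # stable success envelope on stdout for automation consumers
    print(json.dumps({"success": True, "code": "ok", "data": payload},
                     ensure_ascii=False, indent=2 if args.pretty else None))
    return 0
```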


@@ -0,0 +1,218 @@
---
name: auto-daily-summary
description: Generate recurring summaries, daily reports, content digests, and concise action-oriented briefs from multiple inputs. Use when the user asks for daily summaries, periodic briefings, meeting digests, content condensation, or automated recurring report generation. Chinese trigger phrases include: 日报、周报、摘要、会议纪要、内容浓缩、自动汇总、每天发我一份总结。
---
# Auto Daily Summary
## Overview
This skill converts scattered information into concise, structured summaries for recurring or one-off use.
Typical scenarios:
- daily or weekly report generation
- long content condensation
- meeting or conversation summary
- multi-source digest
- action-item extraction
This skill focuses on **organization and summarization**, not source retrieval itself.
## Quick Start
When the user asks for a summary or report:
1. Identify the sources to summarize
2. Clarify the audience and desired level of detail
3. Determine whether the output is one-time or recurring
4. Summarize by theme, not by raw chronological dump unless requested
5. Extract action items and watch items when useful
### Chinese Task Mapping
- “帮我整理成日报” → `daily_report`
- “做个周报/周总结” → `digest` or `daily_report`
- “把这段会议内容整理一下” → `meeting_digest`
- “浓缩成 3-5 条重点” → `digest` + `short`
- “每天早上发我一份总结” → `plan-recurring` + `schedule-job`
## Input Requirements
| Field | Required | Description |
|-------|----------|-------------|
| source content | yes | Text, notes, messages, links, reports, logs, or mixed content |
| summary objective | yes | Inform, decide, archive, handoff, or monitor |
| audience | no | Self, team, manager, executive, customer |
| time scope | no | Today, this week, meeting duration, selected period |
| desired length | no | TL;DR, short, standard, detailed |
| output style | no | Daily report, digest, executive summary, bullet list |
| action extraction | no | Whether to extract todos, risks, blockers |
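A hypothetical request payload that satisfies the table above. The envelope (top-level `data` with `sources` and `objective`) follows what `summary_cli.py` validates; the concrete values are illustrative only:

```python
import json

# Hypothetical payload for `run`; field names mirror the table above and
# the validation in summary_cli.py, values are made up for illustration.
payload = {
    "language": "zh",
    "data": {
        "objective": "inform",       # required: why the summary is needed
        "style": "daily_report",     # optional output style
        "length": "short",           # tldr / short / standard / detailed
        "extract_actions": True,     # pull TODO / action items
        "extract_risks": False,
        "sources": [
            {"content": "完成支付模块联调。TODO: 跟进压测环境申请"},
            {"content": "风险:依赖服务偶发超时,需要持续观察"},
        ],
    },
}
# Passed to the CLI as: summary_cli.py run --input-json '<this JSON>'
print(json.dumps(payload, ensure_ascii=False))
```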
## Workflow Decision Tree
### Content Summary
Use when the user wants a concise summary of a long input.
### Daily / Weekly Report
Use when the user wants a periodic report with sections and status updates.
### Meeting Digest
Use when the user wants decisions, action items, and blockers from a discussion.
### Recurring Summary Workflow
Use when the user wants this to happen on a schedule. In that case, pair with `schedule-job`.
## Instructions
### Step 1: Identify source boundaries
Clarify what should and should not be included in the summary.
### Step 2: Determine the correct abstraction level
Choose the right level for the audience:
- executive audience -> implications and decisions
- working team -> concrete tasks and blockers
- archive -> structured factual recap
### Step 3: Group by theme
Prefer grouping by:
- progress
- decisions
- blockers
- risks
- next steps
Avoid copying source order unless chronology itself matters.
### Step 4: Extract action items
When appropriate, identify:
- owner
- task
- due timing
- dependency or blocker
If ownership is unclear, say so.
### Step 5: Prepare for automation if needed
If the user wants recurring output:
- use `schedule-job` for cadence
- use `imap-smtp-email` or other enabled notification skills for delivery
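For reference, `plan-recurring` hands a schedule-ready payload to `schedule-job`; the field names below come from `summary_core.py`, while the cadence itself lives in the schedule-job configuration:

```python
# Schedule-ready payload shape produced by plan-recurring
# (field names from summary_core.py).
schedule_payload = {
    "suggested_name": "Daily Summary",
    "message": (
        "[Scheduled Task Triggered] 请立即汇总最新内容并输出结构化摘要,"
        "如有行动项和风险请一并列出,然后选择合适的通知方式发送给用户。"
    ),
}
```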
## Scripts
### CLI Usage
Use the following commands when you need stable structured outputs:
```bash
poetry run python skills/auto-daily-summary/scripts/summary_cli.py validate --input-json '<JSON>'
poetry run python skills/auto-daily-summary/scripts/summary_cli.py run --input-json '<JSON>' --output json
poetry run python skills/auto-daily-summary/scripts/summary_cli.py plan-recurring --input-json '<JSON>'
```
### Recommended Uses
- `validate` - check whether the summary request payload is complete
- `run` - generate summary JSON and markdown
- `plan-recurring` - generate a schedule-ready message payload for `schedule-job`
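All commands print a stable JSON envelope; consumers should branch on the `success` field rather than on message text. A minimal sketch, with field names taken from `summary_cli.py` and placeholder values:

```python
import json

# Sketch of the success envelope the CLI prints; values are placeholders.
envelope = {
    "success": True,
    "code": "ok",
    "message": "summary generated",
    "data": {"summary": "...", "sections": [], "markdown": "# Summary"},
    "meta": {"generated_at": "2026-04-16T02:23:54+00:00", "source_count": 2},
    "errors": [],
}

def is_ok(raw: str) -> bool:
    # automation consumers should branch on "success", not on exit text
    return json.loads(raw).get("success") is True

print(is_ok(json.dumps(envelope)))
```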
## Output Templates
### Daily Report
```markdown
# Daily Report
## Summary
[Short summary]
## Key Updates
- [Update]
## Decisions
- [Decision]
## Risks / Blockers
- [Risk or blocker]
## Next Actions
- [Action]
```
### Content Digest
```markdown
# Content Digest
## TL;DR
[Very short summary]
## Main Themes
### 1. [Theme]
- [Key point]
## Notable Details
- [Detail]
## Follow-up
- [Suggested follow-up]
```
## Quality Checklist
Before finalizing, verify:
- the summary matches the audience level
- repetition and noise are removed
- key decisions are not buried
- action items are explicit when relevant
- uncertainty is preserved rather than flattened away
- the result is shorter and clearer than the source material
## Fallback Strategy
If the input is too fragmented:
- produce a partial summary by theme
- list gaps or unclear areas
- ask for additional source material only if needed for the user's stated goal
## Related Skills
- `skills/schedule-job/SKILL.md` - automate recurring execution
- `skills/imap-smtp-email/SKILL.md` - send summaries via email
- `skills/market-academic-insight/SKILL.md` - use when the task is deeper research synthesis rather than pure summarization
- `skills/competitor-news-intel/SKILL.md` - use when competitor monitoring and intelligence is the real task
## Examples
**User**: "帮我把今天的工作内容整理成日报"
Expected output:
- summary
- key updates
- blockers
- next actions
**User**: "把这篇长文浓缩成 5 条重点"
Expected output:
- TL;DR
- 5 concise points
- optional follow-up note
**User**: "每天早上自动给我发新闻摘要"
Expected output:
- summary format definition
- recommendation to combine with `schedule-job`
- delivery method confirmation
**User**: "把这段会议记录整理成会议纪要"
Expected output:
- summary
- decisions
- action items
- blockers if any
**User**: "给我做个今天的三段式总结"
Expected output:
- summary
- key updates
- next actions


@@ -0,0 +1,164 @@
#!/usr/bin/env python3
import argparse
import json
import sys
from datetime import datetime, UTC

from summary_core import build_summary, validate_payload

ERROR_TEMPLATE = {
    "success": False,
    "code": "invalid_input",
    "message": "",
    "data": {},
    "meta": {},
    "errors": [],
}

def _now_iso() -> str:
    return datetime.now(UTC).isoformat()

def _emit_json(data: dict, pretty: bool, stream=None):
    print(json.dumps(data, ensure_ascii=False, indent=2 if pretty else None), file=stream or sys.stdout)

def _error_response(code: str, message: str, errors: list[str] | None = None) -> dict:
    return {
        **ERROR_TEMPLATE,
        "code": code,
        "message": message,
        "meta": {"generated_at": _now_iso()},
        "errors": errors or [],
    }

def _parse_bool(value: str | None) -> bool | None:
    if value is None:
        return None
    lowered = value.lower()
    if lowered in {"1", "true", "yes", "y"}:
        return True
    if lowered in {"0", "false", "no", "n"}:
        return False
    raise ValueError(f"invalid boolean value: {value}")

def _load_payload(raw: str) -> dict:
    return json.loads(raw)

def _apply_overrides(payload: dict, args: argparse.Namespace) -> dict:
    payload.setdefault("data", {})
    if args.lang:
        payload["language"] = args.lang
    if args.style:
        payload["data"]["style"] = args.style
    if args.length:
        payload["data"]["length"] = args.length
    if hasattr(args, "extract_actions"):
        extract_actions = _parse_bool(getattr(args, "extract_actions", None))
        if extract_actions is not None:
            payload["data"]["extract_actions"] = extract_actions
    if hasattr(args, "extract_risks"):
        extract_risks = _parse_bool(getattr(args, "extract_risks", None))
        if extract_risks is not None:
            payload["data"]["extract_risks"] = extract_risks
    return payload

def cmd_validate(args: argparse.Namespace):
    payload = _apply_overrides(_load_payload(args.input_json), args)
    errors = validate_payload(payload)
    result = {
        "success": not errors,
        "code": "ok" if not errors else "invalid_input",
        "message": "payload valid" if not errors else "payload invalid",
        "data": {"valid": not errors},
        "meta": {"generated_at": _now_iso()},
        "errors": errors,
    }
    target_stream = sys.stdout if not errors else sys.stderr
    _emit_json(result, args.pretty, target_stream)
    if errors:
        raise SystemExit(1)

def cmd_run(args: argparse.Namespace):
    payload = _apply_overrides(_load_payload(args.input_json), args)
    errors = validate_payload(payload)
    if errors:
        _emit_json(_error_response("invalid_input", "payload invalid", errors), args.pretty, sys.stderr)
        raise SystemExit(1)
    data = build_summary(payload)
    if args.output == "markdown":
        print(data["markdown"])
        return
    result = {
        "success": True,
        "code": "ok",
        "message": "summary generated",
        "data": data,
        "meta": {
            "generated_at": _now_iso(),
            "source_count": len(payload.get("data", {}).get("sources", [])),
        },
        "errors": [],
    }
    _emit_json(result, args.pretty)

def cmd_plan_recurring(args: argparse.Namespace):
    payload = _apply_overrides(_load_payload(args.input_json), args)
    errors = validate_payload(payload)
    if errors:
        _emit_json(_error_response("invalid_input", "payload invalid", errors), args.pretty, sys.stderr)
        raise SystemExit(1)
    data = build_summary(payload)
    result = {
        "success": True,
        "code": "ok",
        "message": "recurring plan generated",
        "data": {"schedule_payload": data["schedule_payload"]},
        "meta": {"generated_at": _now_iso()},
        "errors": [],
    }
    _emit_json(result, args.pretty)

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Generate structured summaries")
    subparsers = parser.add_subparsers(dest="command", required=True)
    for name in ["validate", "run", "plan-recurring"]:
        sub = subparsers.add_parser(name)
        sub.add_argument("--input-json", required=True)
        sub.add_argument("--lang")
        sub.add_argument("--style")
        sub.add_argument("--length")
        sub.add_argument("--extract-actions")
        sub.add_argument("--extract-risks")
        sub.add_argument("--pretty", action="store_true")
        if name == "run":
            sub.add_argument("--output", choices=["json", "markdown"], default="json")
    subparsers.choices["validate"].set_defaults(func=cmd_validate)
    subparsers.choices["run"].set_defaults(func=cmd_run)
    subparsers.choices["plan-recurring"].set_defaults(func=cmd_plan_recurring)
    return parser

if __name__ == "__main__":
    parser = build_parser()
    args = parser.parse_args()
    try:
        args.func(args)
    except json.JSONDecodeError as exc:
        _emit_json(_error_response("invalid_input", f"invalid json: {exc}", [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except ValueError as exc:
        _emit_json(_error_response("invalid_input", str(exc), [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except Exception as exc:
        _emit_json(_error_response("internal_error", "unexpected error", [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)


@@ -0,0 +1,228 @@
import re
from collections import Counter
from typing import Any

SUMMARY_LENGTH_LIMITS = {
    "tldr": 2,
    "short": 4,
    "standard": 6,
    "detailed": 10,
}
ACTION_PREFIX_PATTERNS = [
    # accept both halfwidth and fullwidth colons after the prefix
    r"^(?:TODO|待办|action)[::]?\s*(.+)$",
    r"^(?:需要|跟进)\s*(.+)$",
]
RISK_KEYWORDS = {
    "high": ["阻塞", "blocker", "故障", "失败", "严重", "不可用"],
    "medium": ["风险", "延迟", "异常", "超时", "报错"],
    "low": ["提醒", "注意", "观察", "待确认"],
}
BOOL_FIELDS = ["extract_actions", "extract_risks"]
VALID_STYLES = {"daily_report", "digest", "meeting_digest", "executive"}
VALID_LENGTHS = {"tldr", "short", "standard", "detailed"}

def _clean_text(text: str) -> str:
    return re.sub(r"\s+", " ", (text or "").strip())

def _normalize_key(text: str) -> str:
    text = _clean_text(text).lower()
    return re.sub(r"[^\w\u4e00-\u9fff]+", "", text)

def _split_sentences(text: str) -> list[str]:
    # split on both fullwidth and halfwidth sentence terminators
    raw_parts = re.split(r"[。!?;.!?;]+|\n+", text)
    return [_clean_text(part) for part in raw_parts if _clean_text(part)]

def _sentence_tokens(sentence: str) -> list[str]:
    return re.findall(r"[A-Za-z0-9_-]+|[\u4e00-\u9fff]{2,}", sentence.lower())

def _top_sentences(texts: list[str], limit: int) -> list[str]:
    sentences: list[str] = []
    for text in texts:
        sentences.extend(_split_sentences(text))
    if not sentences:
        return []
    token_counter = Counter()
    for sentence in sentences:
        token_counter.update(_sentence_tokens(sentence))
    # score each sentence by token frequency; earlier sentences win ties
    scored: list[tuple[int, int, str]] = []
    for index, sentence in enumerate(sentences):
        score = sum(token_counter[token] for token in _sentence_tokens(sentence))
        scored.append((score, -index, sentence))
    ranked = [sentence for _, _, sentence in sorted(scored, reverse=True)]
    unique_ranked = []
    seen = set()
    for sentence in ranked:
        key = _normalize_key(sentence)
        if key and key not in seen:
            seen.add(key)
            unique_ranked.append(sentence)
        if len(unique_ranked) >= limit:
            break
    return unique_ranked

def _trim_fragment(text: str, max_length: int = 80) -> str:
    fragment = re.split(r"[,。;;!?]", text, maxsplit=1)[0]
    fragment = _clean_text(fragment)
    return fragment[:max_length].strip()

def _extract_actions(texts: list[str]) -> list[dict[str, Any]]:
    items: list[dict[str, Any]] = []
    for text in texts:
        for sentence in _split_sentences(text):
            for pattern in ACTION_PREFIX_PATTERNS:
                match = re.match(pattern, sentence, re.IGNORECASE)
                if not match:
                    continue
                task = _trim_fragment(match.group(1))
                if len(task) < 2:
                    continue
                items.append({"task": task, "owner": None, "due_at": None, "blocker": None})
                break
    dedup = []
    seen = set()
    for item in items:
        key = _normalize_key(item["task"])
        if key and key not in seen:
            seen.add(key)
            dedup.append(item)
    return dedup[:10]

def _extract_risks(texts: list[str]) -> list[dict[str, Any]]:
    risks: list[dict[str, Any]] = []
    for text in texts:
        for sentence in _split_sentences(text):
            lowered = sentence.lower()
            for impact, keywords in RISK_KEYWORDS.items():
                matched = next((keyword for keyword in keywords if keyword.lower() in lowered), None)
                if not matched:
                    continue
                # trim to a window around the keyword so whole paragraphs
                # are not swallowed into one risk item
                start = max(0, lowered.find(matched.lower()) - 18)
                end = min(len(sentence), lowered.find(matched.lower()) + len(matched) + 30)
                fragment = _clean_text(sentence[start:end])
                fragment = fragment[:120]
                if len(fragment) < 2:
                    continue
                risks.append({"risk": fragment, "impact": impact, "mitigation": None})
                break
    dedup = []
    seen = set()
    for item in risks:
        key = _normalize_key(item["risk"])
        if key and key not in seen:
            seen.add(key)
            dedup.append(item)
    return dedup[:10]

def _build_summary_line(sentences: list[str]) -> str:
    if not sentences:
        return "暂无可提炼的关键信息。"
    # join with a separator so the two selected sentences stay readable
    return " ".join(sentences[:2])

def build_summary(payload: dict[str, Any]) -> dict[str, Any]:
    data = payload.get("data", {})
    sources = data.get("sources", [])
    texts = [_clean_text(source.get("content", "")) for source in sources if _clean_text(source.get("content", ""))]
    length = data.get("length", "standard")
    style = data.get("style", "daily_report")
    limit = SUMMARY_LENGTH_LIMITS.get(length, SUMMARY_LENGTH_LIMITS["standard"])
    top_sentences = _top_sentences(texts, limit)
    summary_line = _build_summary_line(top_sentences)
    summary_keys = {_normalize_key(sentence) for sentence in top_sentences[:2]}
    detail_sentences = [sentence for sentence in top_sentences if _normalize_key(sentence) not in summary_keys]
    sections = []
    if detail_sentences:
        if len(detail_sentences) == 1:
            sections = [{"title": "Key Updates", "bullets": detail_sentences}]
        else:
            midpoint = max(1, len(detail_sentences) // 2)
            sections = [
                {"title": "Key Updates", "bullets": detail_sentences[:midpoint]},
                {"title": "Notable Details", "bullets": detail_sentences[midpoint:]},
            ]
    action_items = _extract_actions(texts) if data.get("extract_actions") else []
    risk_items = _extract_risks(texts) if data.get("extract_risks") else []
    markdown_lines = ["# Summary", "", "## Summary", f"- {summary_line}"]
    for section in sections:
        if not section["bullets"]:
            continue
        markdown_lines.extend(["", f"## {section['title']}"])
        markdown_lines.extend(f"- {bullet}" for bullet in section["bullets"])
    if action_items:
        markdown_lines.extend(["", "## Action Items"])
        markdown_lines.extend(f"- {item['task']}" for item in action_items)
    if risk_items:
        markdown_lines.extend(["", "## Risks"])
        markdown_lines.extend(f"- [{item['impact']}] {item['risk']}" for item in risk_items)
    schedule_payload = {
        "suggested_name": "Daily Summary",
        "message": "[Scheduled Task Triggered] 请立即汇总最新内容并输出结构化摘要,如有行动项和风险请一并列出,然后选择合适的通知方式发送给用户。",
    }
    return {
        "summary": summary_line,
        "sections": sections,
        "action_items": action_items,
        "risk_items": risk_items,
        "markdown": "\n".join(markdown_lines),
        "schedule_payload": schedule_payload,
        "style": style,
    }

def _validate_source(source: Any, index: int) -> list[str]:
    errors = []
    if not isinstance(source, dict):
        return [f"data.sources[{index}] must be an object"]
    if not _clean_text(str(source.get("content", ""))):
        errors.append(f"data.sources[{index}].content is required")
    return errors

def validate_payload(payload: dict[str, Any]) -> list[str]:
    errors = []
    data = payload.get("data")
    if not isinstance(data, dict):
        return ["data must be an object"]
    sources = data.get("sources")
    if not isinstance(sources, list) or not sources:
        errors.append("data.sources must be a non-empty array")
    else:
        for index, source in enumerate(sources):
            errors.extend(_validate_source(source, index))
    objective = data.get("objective")
    if not isinstance(objective, str) or not objective.strip():
        errors.append("data.objective is required")
    style = data.get("style")
    if style is not None and style not in VALID_STYLES:
        errors.append(f"data.style must be one of {sorted(VALID_STYLES)}")
    length = data.get("length")
    if length is not None and length not in VALID_LENGTHS:
        errors.append(f"data.length must be one of {sorted(VALID_LENGTHS)}")
    for field in BOOL_FIELDS:
        value = data.get(field)
        if value is not None and not isinstance(value, bool):
            errors.append(f"data.{field} must be a boolean")
    return errors


@@ -0,0 +1,216 @@
---
name: competitor-news-intel
description: Research competitor news, organize developments by company and theme, and produce actionable competitive intelligence with impact assessment and follow-up recommendations. Use when the user asks for competitor monitoring, competitor news tracking, market watch summaries, or business intelligence from external updates. Chinese trigger phrases include: 竞品跟踪、竞对情报、竞品新闻、市场监听、舆情观察、竞品周报、最近竞品有什么动作。
---
# Competitor News Intelligence
## Overview
This skill monitors and synthesizes competitor-related news into actionable business intelligence.
It is appropriate when the user needs more than a list of links. The output should explain:
- what happened
- why it matters
- who it affects
- what to monitor next
## Quick Start
When the user asks for competitor research or monitoring:
1. Confirm the competitor set
2. Confirm the time range and region
3. Clarify what kinds of events matter
4. Retrieve or review relevant information
5. Organize it into a structured intelligence brief with impact assessment
### Chinese Task Mapping
- “跟踪一下最近竞品动态” → `collect` or `run`
- “做一份竞对周报” → `run`
- “最近竞品有什么动作” → `collect`
- “帮我长期监控这几个竞品” → `plan-recurring` + `schedule-job`
- “看下竞品最近有没有融资/发布新产品” → `collect` + category filtering
## Input Requirements
| Field | Required | Description |
|-------|----------|-------------|
| competitors | yes | Company names, brands, or product lines |
| objective | yes | Monitoring, weekly digest, event scan, strategic watch |
| time range | no | Today, past 7 days, month, quarter, custom |
| geography | no | Country, region, or market |
| event categories | no | Product launch, pricing, partnership, hiring, financing, regulation, PR, channel |
| output depth | no | Brief scan / standard intelligence / detailed watch |
| audience | no | Founder, strategy team, sales, product, leadership |
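A hypothetical `run` payload for `intel_cli.py`. The `competitors` and `objective` fields mirror the required fields above, and the `search` block matches the `--provider`/`--freshness`/`--count` overrides; the concrete values are illustrative:

```python
import json

# Hypothetical request payload; envelope follows intel_cli.py's overrides,
# company names and limits are made up for illustration.
payload = {
    "data": {
        "competitors": ["ExampleCo", "DemoSoft"],
        # monitor / weekly_digest / event_scan / strategic_watch
        "objective": "weekly_digest",
        "search": {"provider": "baidu", "freshness": "week", "count": 10},
    },
    "max_events": 20,
}
# Passed to the CLI as: intel_cli.py run --input-json '<this JSON>'
print(json.dumps(payload, ensure_ascii=False))
```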
## Workflow Decision Tree
### Quick Monitoring Brief
Use this for short competitor update summaries.
### Standard Intelligence Brief
Use this for grouped event analysis with implications.
### Strategic Watch
Use this when the user wants patterns, momentum, and what to watch next.
### Recurring Monitoring
Use this when the user wants periodic competitor watch outputs. Pair with `schedule-job`.
## Instructions
### Step 1: Define monitoring scope
Clarify:
- which competitors matter most
- which kinds of events matter most
- what decision the monitoring should support
### Step 2: Gather evidence
Use available search skills such as `baidu-search` when current information is needed.
For each relevant update, capture:
- competitor
- date
- event type
- short description
- source
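A captured development might look like the following sketch. The `competitor`/`date`/`title`/`summary`/`source_url` keys align with the event-identity fields in `intel_core.py`; `category` is the event type from Step 3, and the company, date, and URL are invented for illustration:

```python
# One captured development using the fields listed above; all values
# are placeholders, not real news.
event = {
    "competitor": "ExampleCo",
    "date": "2026-04-10",
    "category": "financing",   # one of the Step 3 categories
    "title": "ExampleCo announces Series B",
    "summary": "ExampleCo 宣布完成新一轮融资,计划扩张渠道。",
    "source_url": "https://example.com/news/123",
}
```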
### Step 3: Classify events
Typical categories:
- product / feature launch
- pricing or packaging change
- partnership or channel move
- hiring or org change
- financing or M&A
- regulatory or compliance issue
- brand or PR movement
### Step 4: Assess impact
For each important event, explain:
- likely business impact
- urgency level
- affected function (sales, product, strategy, marketing)
- whether follow-up monitoring is needed
### Step 5: Produce intelligence output
Do not stop at listing news. Synthesize patterns across competitors when possible.
## Scripts
### CLI Usage
Use the following commands when you need stable structured outputs:
```bash
poetry run python skills/competitor-news-intel/scripts/intel_cli.py collect --input-json '<JSON>'
poetry run python skills/competitor-news-intel/scripts/intel_cli.py analyze --input-json '<JSON>' --output json
poetry run python skills/competitor-news-intel/scripts/intel_cli.py run --input-json '<JSON>' --output json
poetry run python skills/competitor-news-intel/scripts/intel_cli.py plan-recurring --input-json '<JSON>'
```
### Recommended Uses
- `collect` - gather candidate competitor developments
- `analyze` - classify, deduplicate, and assess impact from collected events
- `run` - complete end-to-end intelligence generation
- `plan-recurring` - generate a schedule-ready monitoring message for `schedule-job`
- Real-time collection requires `BAIDU_API_KEY`
## Output Template
```markdown
# Competitor News Intelligence Brief
## Summary
[Short overview of the competitive landscape during the period]
## Monitoring Scope
- Competitors:
- Time range:
- Geography:
- Key event categories:
## Key Developments
### [Competitor / Event]
- Date:
- Category:
- What happened:
- Why it matters:
- Impact level: Low / Medium / High
- Suggested follow-up:
## Cross-Competitor Patterns
- [Pattern]
## Risks and Opportunities for Us
- [Implication]
## Watch List
- [Item to keep monitoring]
## Source Log
- [Source] - [Date] - [Competitor] - [Headline or key point]
```
## Quality Checklist
Before finalizing, verify:
- the scope matches the requested competitors and timeframe
- event categories are consistent
- impact labels are justified, not arbitrary
- links or sources are attributable
- repeated news is de-duplicated
- the brief includes implications, not just headlines
## Fallback Strategy
If the evidence is sparse:
- return a lighter monitoring brief
- highlight missing visibility
- recommend additional competitors, keywords, or sources to track
## Related Skills
- `skills/baidu-search/SKILL.md` - retrieve current external information
- `skills/auto-daily-summary/SKILL.md` - condense larger result sets into shorter periodic summaries
- `skills/schedule-job/SKILL.md` - automate recurring competitor monitoring
- `skills/market-academic-insight/SKILL.md` - use when the task broadens into industry or technology research
## Examples
**User**: "帮我跟踪一下最近一周几家竞品的新闻"
Expected output:
- structured competitor brief
- event categorization
- impact assessment
- watch list
**User**: "做一份竞对情报周报"
Expected output:
- weekly summary
- grouped developments
- cross-competitor patterns
- implications for our team
**User**: "最近竞品有什么动作?"
Expected output:
- recent developments
- event categories
- impact notes
**User**: "帮我长期监控这几个竞品"
Expected output:
- monitoring structure
- recommendation to combine with `schedule-job`
- suggested recurring payload
**User**: "看下竞品最近有没有融资或者发新品"
Expected output:
- filtered developments
- impact assessment
- follow-up watch list


@@ -0,0 +1,204 @@
#!/usr/bin/env python3
import argparse
import json
import sys
from datetime import datetime, UTC

from intel_core import (
    analyze_events,
    validate_analyze_payload,
    validate_collect_payload,
    validate_run_payload,
)
from search_provider import (
    InvalidSearchInputError,
    MissingAPIKeyError,
    SearchProviderError,
    UnsupportedProviderError,
    UpstreamHTTPError,
    collect_events,
)

ERROR_TEMPLATE = {
    "success": False,
    "code": "invalid_input",
    "message": "",
    "data": {},
    "meta": {},
    "errors": [],
}

def _now_iso() -> str:
    return datetime.now(UTC).isoformat()

def _emit(data: dict, pretty: bool, stream=None):
    print(json.dumps(data, ensure_ascii=False, indent=2 if pretty else None), file=stream or sys.stdout)

def _error_response(code: str, message: str, errors: list[str] | None = None) -> dict:
    return {
        **ERROR_TEMPLATE,
        "code": code,
        "message": message,
        "meta": {"generated_at": _now_iso()},
        "errors": errors or [],
    }

def _load_payload(raw: str) -> dict:
    return json.loads(raw)

def _override_search(payload: dict, args: argparse.Namespace) -> dict:
    payload.setdefault("data", {})
    payload["data"].setdefault("search", {})
    if getattr(args, "provider", None):
        payload["data"]["search"]["provider"] = args.provider
    if getattr(args, "freshness", None):
        payload["data"]["search"]["freshness"] = args.freshness
    if getattr(args, "count", None) is not None:
        payload["data"]["search"]["count"] = args.count
    if getattr(args, "max_events", None) is not None:
        payload["max_events"] = args.max_events
    return payload

def cmd_collect(args: argparse.Namespace):
    payload = _override_search(_load_payload(args.input_json), args)
    errors = validate_collect_payload(payload)
    if errors:
        _emit(_error_response("invalid_input", "payload invalid", errors), args.pretty, sys.stderr)
        raise SystemExit(1)
    events = collect_events(payload)
    result = {
        "success": True,
        "code": "ok",
        "message": "events collected",
        "data": {"developments": events},
        "meta": {"generated_at": _now_iso(), "source_count": len(events)},
        "errors": [],
    }
    _emit(result, args.pretty)

def cmd_analyze(args: argparse.Namespace):
    payload = _override_search(_load_payload(args.input_json), args)
    errors = validate_analyze_payload(payload)
    if errors:
        _emit(_error_response("invalid_input", "payload invalid", errors), args.pretty, sys.stderr)
        raise SystemExit(1)
    events = payload.get("data", {}).get("developments", [])
    data = analyze_events(payload, events)
    result = {
        "success": True,
        "code": "ok",
        "message": "intelligence analyzed",
        "data": data,
        "meta": {
            "generated_at": _now_iso(),
            "raw_count": data.get("stats", {}).get("raw_count", len(events)),
            "dedup_count": data.get("stats", {}).get("dedup_count", len(events)),
            "returned_count": data.get("stats", {}).get("returned_count", len(data.get("developments", []))),
        },
        "errors": [],
    }
    if args.output == "markdown":
        print(data["markdown"])
        return
    _emit(result, args.pretty)

def cmd_run(args: argparse.Namespace):
    payload = _override_search(_load_payload(args.input_json), args)
    errors = validate_run_payload(payload)
    if errors:
        _emit(_error_response("invalid_input", "payload invalid", errors), args.pretty, sys.stderr)
        raise SystemExit(1)
    events = collect_events(payload)
    data = analyze_events(payload, events)
    result = {
        "success": True,
        "code": "ok",
        "message": "intelligence brief generated",
        "data": data,
        "meta": {
            "generated_at": _now_iso(),
            "raw_count": data.get("stats", {}).get("raw_count", len(events)),
            "dedup_count": data.get("stats", {}).get("dedup_count", len(events)),
            "returned_count": data.get("stats", {}).get("returned_count", len(data.get("developments", []))),
        },
        "errors": [],
    }
    if args.output == "markdown":
        print(data["markdown"])
        return
    _emit(result, args.pretty)

def cmd_plan_recurring(args: argparse.Namespace):
    payload = _override_search(_load_payload(args.input_json), args)
    errors = validate_run_payload(payload)
    if errors:
        _emit(_error_response("invalid_input", "payload invalid", errors), args.pretty, sys.stderr)
        raise SystemExit(1)
    data = analyze_events(payload, [])
    result = {
        "success": True,
        "code": "ok",
        "message": "recurring plan generated",
        "data": {"schedule_payload": data["schedule_payload"]},
        "meta": {"generated_at": _now_iso()},
        "errors": [],
    }
    _emit(result, args.pretty)

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Generate competitor intelligence")
    subparsers = parser.add_subparsers(dest="command", required=True)
    for name in ["collect", "analyze", "run", "plan-recurring"]:
        sub = subparsers.add_parser(name)
        sub.add_argument("--input-json", required=True)
        sub.add_argument("--provider")
        sub.add_argument("--freshness")
        sub.add_argument("--count", type=int)
        sub.add_argument("--max-events", type=int)
        sub.add_argument("--pretty", action="store_true")
        if name in {"analyze", "run"}:
            sub.add_argument("--output", choices=["json", "markdown"], default="json")
    subparsers.choices["collect"].set_defaults(func=cmd_collect)
    subparsers.choices["analyze"].set_defaults(func=cmd_analyze)
    subparsers.choices["run"].set_defaults(func=cmd_run)
    subparsers.choices["plan-recurring"].set_defaults(func=cmd_plan_recurring)
    return parser

if __name__ == "__main__":
    parser = build_parser()
    args = parser.parse_args()
    try:
        args.func(args)
    except json.JSONDecodeError as exc:
        _emit(_error_response("invalid_input", f"invalid json: {exc}", [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except MissingAPIKeyError as exc:
        _emit(_error_response("missing_api_key", str(exc), [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except UnsupportedProviderError as exc:
        _emit(_error_response("unsupported_provider", str(exc), [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except InvalidSearchInputError as exc:
        _emit(_error_response("invalid_input", str(exc), [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except UpstreamHTTPError as exc:
        _emit(_error_response("upstream_http_error", str(exc), [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except SearchProviderError as exc:
        _emit(_error_response("provider_error", str(exc), [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)
    except Exception as exc:
        _emit(_error_response("internal_error", "unexpected error", [str(exc)]), getattr(args, "pretty", False), sys.stderr)
        raise SystemExit(1)


@@ -0,0 +1,268 @@
from collections import Counter
import hashlib
import re
from typing import Any

CATEGORY_KEYWORDS = {
    "product": ["发布", "新品", "功能", "product", "launch", "上线"],
    "pricing": ["价格", "降价", "涨价", "套餐", "pricing"],
    "partnership": ["合作", "联盟", "partnership", "channel", "生态"],
    "hiring": ["招聘", "任命", "高管", "hiring", "团队"],
    "financing": ["融资", "投资", "并购", "funding", "m&a", "收购"],
    "regulation": ["监管", "合规", "处罚", "regulation", "政策"],
    "pr": ["宣传", "活动", "品牌", "舆情", "pr", "采访"],
}

IMPACT_HINTS = {
    "high": ["融资", "并购", "重大", "战略", "首发", "收购", "独家"],
    "medium": ["发布", "合作", "上线", "调整", "扩张", "升级"],
    "low": ["活动", "采访", "媒体", "观察", "亮相"],
}

VALID_OBJECTIVES = {"monitor", "weekly_digest", "event_scan", "strategic_watch"}
VALID_DEPTHS = {"brief", "standard", "detailed"}
VALID_CATEGORIES = set(CATEGORY_KEYWORDS.keys())


def _clean_text(text: Any) -> str:
    return re.sub(r"\s+", " ", str(text or "")).strip()


def _normalize_key(text: str) -> str:
    return re.sub(r"[^\w\u4e00-\u9fff]+", "", _clean_text(text).lower())


def _stable_event_id(event: dict[str, Any]) -> str:
    raw = "|".join(
        [
            _clean_text(event.get("competitor")),
            _clean_text(event.get("date")),
            _clean_text(event.get("title")),
            _clean_text(event.get("source_url")),
            _clean_text(event.get("summary"))[:120],
        ]
    )
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()[:12]


def _category_scores(text: str) -> dict[str, int]:
    lowered = (text or "").lower()
    scores = {category: 0 for category in CATEGORY_KEYWORDS}
    for category, keywords in CATEGORY_KEYWORDS.items():
        for keyword in keywords:
            scores[category] += lowered.count(keyword.lower())
    return scores


def _classify_category(text: str) -> tuple[str, list[str]]:
    scores = _category_scores(text)
    best_category, best_score = max(scores.items(), key=lambda item: item[1])
    if best_score <= 0:
        return "pr", []
    evidence = [keyword for keyword in CATEGORY_KEYWORDS[best_category] if keyword.lower() in (text or "").lower()]
    return best_category, evidence[:5]


def _impact_level(text: str) -> tuple[str, list[str]]:
    lowered = (text or "").lower()
    score_map = {"low": 0, "medium": 0, "high": 0}
    evidence_map = {"low": [], "medium": [], "high": []}
    for level, keywords in IMPACT_HINTS.items():
        for keyword in keywords:
            count = lowered.count(keyword.lower())
            if count:
                score_map[level] += count
                evidence_map[level].append(keyword)
    if score_map["high"] > 0:
        return "high", evidence_map["high"][:5]
    if score_map["medium"] > 0:
        return "medium", evidence_map["medium"][:5]
    if score_map["low"] > 0:
        return "low", evidence_map["low"][:5]
    return "low", []


def _affected_functions(category: str) -> list[str]:
    if category in {"financing", "partnership", "regulation"}:
        return ["strategy", "sales"]
    if category in {"product", "pricing"}:
        return ["product", "sales"]
    return ["strategy"]


def _dedup_events(events: list[dict[str, Any]]) -> tuple[list[dict[str, Any]], int]:
    dedup = []
    seen = set()
    raw_count = len(events)
    for event in events:
        dedup_key = (
            _normalize_key(event.get("competitor", "")),
            _clean_text(event.get("date", "")),
            _normalize_key(event.get("title", "")),
            _normalize_key(event.get("source_url", "")),
            _normalize_key(event.get("summary", ""))[:60],
        )
        if dedup_key in seen:
            continue
        seen.add(dedup_key)
        dedup.append(event)
    return dedup, raw_count


def analyze_events(payload: dict[str, Any], events: list[dict[str, Any]]) -> dict[str, Any]:
    max_events = int(payload.get("max_events") or payload.get("data", {}).get("search", {}).get("count", 10))
    dedup, raw_count = _dedup_events(events)
    developments = []
    impact_assessment = []
    category_counter = Counter()
    for event in dedup[:max_events]:
        text = f"{event.get('title', '')} {event.get('summary', '')}"
        category, category_evidence = _classify_category(text)
        level, impact_evidence = _impact_level(text)
        event_id = _stable_event_id(event)
        category_counter.update([category])
        normalized = {
            **event,
            "event_id": event_id,
            "category": event.get("category") or category,
            "confidence": "high" if category_evidence else "low",
        }
        developments.append(normalized)
        impact_assessment.append(
            {
                "event_id": event_id,
                "level": level,
                "affected_functions": _affected_functions(category),
                "urgency": "act" if level == "high" else "watch",
                "rationale": f"Event categorized as {category} with {level} impact.",
                "evidence_keywords": {"category": category_evidence, "impact": impact_evidence},
            }
        )
    cross_patterns = [f"{category} appears {count} time(s)" for category, count in category_counter.most_common(3)]
    risks_opportunities = []
    for assessment in impact_assessment[:3]:
        risks_opportunities.append(
            {
                "type": "risk" if assessment["level"] == "high" else "opportunity",
                "rationale": assessment["rationale"],
                "suggested_action": "持续跟踪并评估是否需要产品、市场或销售响应。",
            }
        )
    watch_list = [dev["title"] for dev in developments[:5] if dev.get("title")]
    markdown_lines = ["# Competitor News Intelligence Brief", "", "## Summary"]
    if developments:
        markdown_lines.extend(f"- {item['title']}" for item in developments[:5] if item.get("title"))
    else:
        markdown_lines.append("- No developments analyzed.")
    markdown_lines.extend(["", "## Key Developments"])
    for dev in developments[:5]:
        markdown_lines.extend(
            [
                f"### {dev.get('competitor', 'Unknown')} / {dev.get('title', 'Untitled')}",
                f"- Date: {dev.get('date', '')}",
                f"- Category: {dev.get('category', 'unknown')}",
                f"- What happened: {dev.get('summary', '')}",
            ]
        )
    if cross_patterns:
        markdown_lines.extend(["", "## Cross-Competitor Patterns"])
        markdown_lines.extend(f"- {item}" for item in cross_patterns)
    schedule_payload = {
        "suggested_name": "Competitor Weekly Intel",
        "message": "[Scheduled Task Triggered] 请立即抓取并整理竞品动态,输出结构化竞对情报简报、影响评估和 watch list,然后选择合适的通知方式发送给用户。",
    }
    return {
        "developments": developments,
        "impact_assessment": impact_assessment,
        "cross_patterns": cross_patterns,
        "risks_opportunities": risks_opportunities,
        "watch_list": watch_list,
        "markdown": "\n".join(markdown_lines),
        "schedule_payload": schedule_payload,
        "stats": {"raw_count": raw_count, "dedup_count": len(dedup), "returned_count": len(developments)},
    }


def _validate_competitors(data: dict[str, Any]) -> list[str]:
    errors = []
    competitors = data.get("competitors")
    if not isinstance(competitors, list) or not competitors:
        return ["data.competitors must be a non-empty array"]
    for index, competitor in enumerate(competitors):
        if not isinstance(competitor, str) or not competitor.strip():
            errors.append(f"data.competitors[{index}] must be a non-empty string")
    return errors


def _validate_developments(data: dict[str, Any]) -> list[str]:
    errors = []
    developments = data.get("developments")
    if not isinstance(developments, list) or not developments:
        return ["data.developments must be a non-empty array"]
    for index, development in enumerate(developments):
        if not isinstance(development, dict):
            errors.append(f"data.developments[{index}] must be an object")
            continue
        if not _clean_text(development.get("title", "")):
            errors.append(f"data.developments[{index}].title is required")
        if not _clean_text(development.get("summary", "")):
            errors.append(f"data.developments[{index}].summary is required")
    return errors


def _validate_common(data: dict[str, Any]) -> list[str]:
    errors = []
    objective = data.get("objective")
    if not isinstance(objective, str) or not objective.strip():
        errors.append("data.objective is required")
    elif objective not in VALID_OBJECTIVES:
        errors.append(f"data.objective must be one of {sorted(VALID_OBJECTIVES)}")
    output_depth = data.get("output_depth")
    if output_depth is not None and output_depth not in VALID_DEPTHS:
        errors.append(f"data.output_depth must be one of {sorted(VALID_DEPTHS)}")
    event_categories = data.get("event_categories")
    if event_categories is not None:
        if not isinstance(event_categories, list):
            errors.append("data.event_categories must be an array")
        else:
            for index, category in enumerate(event_categories):
                if category not in VALID_CATEGORIES:
                    errors.append(f"data.event_categories[{index}] must be one of {sorted(VALID_CATEGORIES)}")
    return errors


def validate_collect_payload(payload: dict[str, Any]) -> list[str]:
    data = payload.get("data")
    if not isinstance(data, dict):
        return ["data must be an object"]
    errors = _validate_common(data) + _validate_competitors(data)
    # Use `or {}` so an explicit `"search": null` does not crash the .get() calls below.
    search = data.get("search") or {}
    if not isinstance(search, dict):
        errors.append("data.search must be an object")
        return errors
    provider = search.get("provider")
    if provider is not None and provider != "baidu":
        errors.append("data.search.provider must be baidu")
    count = search.get("count")
    if count is not None and (not isinstance(count, int) or count <= 0 or count > 50):
        errors.append("data.search.count must be an integer between 1 and 50")
    freshness = search.get("freshness") or data.get("time_range")
    if freshness is not None and not isinstance(freshness, str):
        errors.append("data.search.freshness / data.time_range must be a string")
    return errors


def validate_analyze_payload(payload: dict[str, Any]) -> list[str]:
    data = payload.get("data")
    if not isinstance(data, dict):
        return ["data must be an object"]
    return _validate_common(data) + _validate_competitors(data) + _validate_developments(data)


def validate_run_payload(payload: dict[str, Any]) -> list[str]:
    return validate_collect_payload(payload)


@@ -0,0 +1,106 @@
import os
import re
from datetime import datetime, timedelta
from typing import Any

import requests


class SearchProviderError(Exception):
    code = "provider_error"


class MissingAPIKeyError(SearchProviderError):
    code = "missing_api_key"


class UnsupportedProviderError(SearchProviderError):
    code = "unsupported_provider"


class UpstreamHTTPError(SearchProviderError):
    code = "upstream_http_error"


class InvalidSearchInputError(SearchProviderError):
    code = "invalid_input"


def _build_search_filter(freshness: str | None) -> dict[str, Any]:
    if not freshness:
        return {}
    current_time = datetime.now()
    end_date = (current_time + timedelta(days=1)).strftime("%Y-%m-%d")
    if freshness == "pd":
        start_date = (current_time - timedelta(days=1)).strftime("%Y-%m-%d")
    elif freshness == "pw":
        start_date = (current_time - timedelta(days=6)).strftime("%Y-%m-%d")
    elif freshness == "pm":
        start_date = (current_time - timedelta(days=30)).strftime("%Y-%m-%d")
    elif freshness == "py":
        start_date = (current_time - timedelta(days=364)).strftime("%Y-%m-%d")
    elif re.fullmatch(r"\d{4}-\d{2}-\d{2}to\d{4}-\d{2}-\d{2}", freshness):
        start_date, end_date = freshness.split("to")
    else:
        raise InvalidSearchInputError("freshness must be pd, pw, pm, py, or YYYY-MM-DDtoYYYY-MM-DD")
    return {"range": {"page_time": {"gte": start_date, "lt": end_date}}}


def baidu_search(query: str, count: int = 10, freshness: str | None = None) -> list[dict[str, Any]]:
    api_key = os.getenv("BAIDU_API_KEY")
    if not api_key:
        raise MissingAPIKeyError("BAIDU_API_KEY must be set")
    try:
        response = requests.post(
            "https://qianfan.baidubce.com/v2/ai_search/web_search",
            headers={
                "Authorization": f"Bearer {api_key}",
                "X-Appbuilder-From": "openclaw",
                "Content-Type": "application/json",
            },
            json={
                "messages": [{"content": query, "role": "user"}],
                "search_source": "baidu_search_v2",
                "resource_type_filter": [{"type": "web", "top_k": max(1, min(count, 50))}],
                "search_filter": _build_search_filter(freshness),
            },
            timeout=30,
        )
        response.raise_for_status()
    except requests.RequestException as exc:
        # requests.HTTPError is a RequestException subclass, so one handler covers both.
        raise UpstreamHTTPError(str(exc)) from exc
    payload = response.json()
    if "code" in payload:
        raise UpstreamHTTPError(payload.get("message", "baidu search failed"))
    return payload.get("references", [])


def collect_events(payload: dict[str, Any]) -> list[dict[str, Any]]:
    data = payload.get("data", {})
    competitors = data.get("competitors", [])
    search_cfg = data.get("search", {})
    provider = search_cfg.get("provider", "baidu")
    if provider != "baidu":
        raise UnsupportedProviderError(f"provider {provider} is not supported")
    freshness = search_cfg.get("freshness") or data.get("time_range")
    count = int(search_cfg.get("count", 10))
    keywords_extra = search_cfg.get("keywords_extra", [])
    categories = data.get("event_categories", [])
    events = []
    for competitor in competitors:
        query_parts = [competitor, *keywords_extra, *categories]
        query = " ".join(str(part) for part in query_parts if part)
        for item in baidu_search(query=query, count=count, freshness=freshness):
            events.append(
                {
                    "competitor": competitor,
                    "date": item.get("page_time") or "",
                    "title": item.get("title") or "",
                    "summary": item.get("abstract") or item.get("content") or "",
                    "source_url": item.get("url") or "",
                    "source_name": item.get("site_name") or "baidu",
                }
            )
    return events


@@ -0,0 +1,187 @@
---
name: contract-document-generator
description: Draft contracts and formal business documents, rewrite clauses, identify risks, and organize negotiation-ready language. Use when the user asks for contract drafting, clause revision, legal-style document generation, formal agreement structuring, or document-ready policy and terms content. 中文触发词包括:合同起草、协议生成、条款修改、风险审查、保密协议、正式文档撰写。
---
# Contract & Document Generator
## Overview
This skill handles the **content layer** of contracts and formal documents:
- drafting and rewriting clauses
- structuring agreements
- highlighting risks and ambiguities
- preparing negotiation-ready revisions
It does **not** replace file-format skills. Use `docx` and `pdf` for the final document container when needed.
## Quick Start
When the user asks for a contract or formal document:
1. Identify the document type
2. Clarify the governing context and required clauses
3. Confirm whether the task is drafting, editing, summarizing, or risk review
4. Produce a structured output with clear labels for assumptions and unresolved items
5. If the user needs a file, pass the final content to `docx` or `pdf`
### 中文任务映射
- “起草一份合同” → new draft
- “改一下这段条款” → clause revision
- “审一下风险” → risk review
- “整理成正式文件” → draft + docx/pdf handoff
## Input Requirements
| Field | Required | Description |
|-------|----------|-------------|
| document type | yes | NDA, service agreement, employment clause, terms, policy, notice, memo |
| objective | yes | Drafting, revision, review, comparison, simplification |
| parties / stakeholders | no | The involved entities or roles |
| jurisdiction / governing law | no | Legal or regional context |
| must-have clauses | no | Required provisions |
| prohibited or risky clauses | no | Clauses to avoid or watch |
| tone / style | no | Formal, plain language, business-friendly, negotiation-ready |
| output format | no | Clause list, full draft, risk memo, redline guidance |
## Workflow Decision Tree
### New Draft
Use when the user wants a first version of a contract or formal document.
### Clause Revision
Use when the user wants to rewrite specific language, tighten wording, or simplify terms.
### Risk Review
Use when the user wants to understand what is risky, ambiguous, one-sided, or incomplete.
### Comparison / Negotiation Support
Use when the user wants a position memo, fallback language, or issue-by-issue negotiation guidance.
## Instructions
### Step 1: Define document role
Clarify what the document is supposed to do:
- bind parties
- allocate risk
- state responsibilities
- define process
- provide internal or external communication
### Step 2: Identify the minimum structure
Typical sections may include:
- parties and definitions
- scope
- payment or consideration
- obligations
- confidentiality
- IP ownership
- term and termination
- liability and indemnity
- dispute resolution
- notices
Only include sections relevant to the user's objective.
### Step 3: Mark assumptions explicitly
If party names, law, numbers, dates, or scope are missing, mark them as placeholders instead of inventing them.
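One low-tech way to make this step checkable is to use a fixed placeholder convention and scan the draft for anything unresolved before final packaging. The `[[UPPER_SNAKE]]` convention and helper below are an illustrative sketch, not a defined part of this skill:

```python
import re

# Assumed convention: [[UPPER_SNAKE]] marks a fact the user has not supplied yet.
PLACEHOLDER = re.compile(r"\[\[([A-Z0-9_]+)\]\]")


def open_items(draft: str) -> list[str]:
    """Return unresolved placeholders in a draft, in order of first appearance."""
    seen: list[str] = []
    for match in PLACEHOLDER.finditer(draft):
        name = match.group(1)
        if name not in seen:
            seen.append(name)
    return seen


draft = "This Agreement is made between [[PARTY_A]] and [[PARTY_B]], effective [[EFFECTIVE_DATE]]."
```

The resulting list can be copied straight into the draft's "Open Items" section instead of inventing values.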
### Step 4: Review for risk and ambiguity
Check for:
- undefined terms
- missing triggers or deadlines
- one-sided liability allocation
- vague performance obligations
- inconsistent terms across sections
### Step 5: Package the output for the users goal
Depending on the request, return one of:
- full draft
- clause alternatives
- risk memo
- revision guidance
- negotiation checklist
## Output Format
### Full Draft Mode
```markdown
# [Document Title]
## Draft Notes
- Purpose:
- Assumptions:
- Jurisdiction status:
## Draft
[Full structured document]
## Open Items
- [Missing information]
```
### Risk Review Mode
```markdown
# Contract Risk Review
## Summary
[Short overall view]
## Key Risks
### 1. [Risk]
- Clause / section:
- Why it matters:
- Suggested revision:
## Missing Terms
- [Item]
## Negotiation Suggestions
- [Suggestion]
```
## Quality Checklist
Before finalizing, verify:
- placeholders are clearly marked
- legal assumptions are not presented as confirmed facts
- obligations, timing, and consequences are clear
- defined terms are used consistently
- risk comments are concrete and actionable
- document structure matches the user's purpose
## Fallback Strategy
If legal context is unclear:
- provide a business draft
- mark jurisdiction-specific items as requiring legal review
- avoid pretending to give definitive legal advice
## Related Skills
- `skills/docx/SKILL.md` - generate a formal `.docx` document or tracked-change revision
- `skills/pdf/SKILL.md` - extract, archive, or distribute final documents in PDF
## Examples
**User**: "帮我起草一份软件服务合同"
Expected output:
- structured draft
- placeholders for missing commercial terms
- open items section
**User**: "把这段保密条款改得更平衡一些"
Expected output:
- revised clause
- explanation of changes
- negotiation rationale if useful
**User**: "审一下这份合同有哪些风险"
Expected output:
- risk summary
- clause-by-clause risks
- suggested revisions

View File

@ -0,0 +1,181 @@
---
name: financial-report-generator
description: Generate management-friendly financial reporting outputs from structured financial data, including KPI summaries, variance analysis, risk notes, and reporting narratives. Use when the user asks for financial reports, management reporting, monthly or quarterly performance summaries, or finance-oriented document generation. 中文触发词包括:财务月报、财务季报、经营分析、管理层汇报、董事会报告、财务简报。
---
# Financial Report Generator
## Overview
This skill turns financial data into a reporting package that is readable, auditable, and decision-oriented.
It is **not** a replacement for the underlying spreadsheet skill. Instead, it sits above spreadsheet handling and focuses on:
- metric interpretation
- variance explanation
- management reporting structure
- finance-oriented narrative output
## Quick Start
When the user asks for a financial report:
1. Confirm the reporting period and reporting objective
2. Identify the source data format (`xlsx`, `csv`, exported tables, manual numbers)
3. Confirm the reporting audience (operator, finance lead, management, board, investor)
4. If spreadsheets need to be read or generated, reuse the `xlsx` skill
5. Produce a finance brief that explains **what changed**, **why it matters**, and **what to do next**
### 中文任务映射
- “做一份财务月报” → standard financial report
- “整理成董事会简报” → board / management narrative
- “看下本月经营数据有什么异常” → KPI summary + variance analysis
## Input Requirements
| Field | Required | Description |
|-------|----------|-------------|
| reporting objective | yes | Why this report is needed |
| reporting period | yes | Month, quarter, year, or custom date range |
| data source | yes | File, table, pasted data, or existing workbook |
| currency / unit | no | Currency and scale such as RMB, USD, thousands, millions |
| key metrics | no | Revenue, gross margin, burn, CAC, payback, cashflow, etc. |
| comparison basis | no | vs budget, vs last month, vs last quarter, vs last year |
| audience | no | Finance, management, board, investor |
| output format | no | Markdown, HTML outline, DOCX-ready outline, XLSX companion |
## Workflow Decision Tree
### KPI Summary Only
Use this when the user only needs a concise performance snapshot.
### Standard Financial Report
Use this when the user needs:
- core metrics
- variance analysis
- business interpretation
- risk notes
- recommendations
### Board / Management Narrative
Use this when the user needs a report suitable for leadership review, not just raw data output.
## Instructions
### Step 1: Normalize the reporting frame
Clarify:
- reporting period
- comparison basis
- unit and currency
- whether numbers are actuals, budget, forecast, or scenario assumptions
### Step 2: Identify core metrics
Select the metrics that matter for the objective. Typical categories:
- revenue and growth
- cost and margin
- expense structure
- cash and runway
- customer economics
- forecast vs actual variance
### Step 3: Explain movements
For material changes, answer:
- what changed
- compared with what
- likely driver
- business significance
Do not just restate percentages without interpretation.
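As a sketch of this rule, the helper below separates "worth explaining" from "snapshot only" by flagging movements above a materiality threshold. The 10% threshold and the metric names are illustrative assumptions, not part of this skill:

```python
def material_variances(metrics, threshold=0.10):
    """Return (name, relative variance) for metrics that exceed the threshold.

    metrics maps name -> (current, comparison); a zero comparison is skipped,
    since no relative variance can be computed (disclose that in Data Notes).
    """
    flagged = []
    for name, (current, comparison) in metrics.items():
        if comparison == 0:
            continue
        variance = (current - comparison) / comparison
        if abs(variance) >= threshold:
            flagged.append((name, round(variance, 4)))
    return flagged


kpis = {
    "revenue": (1150.0, 1000.0),   # +15% vs budget -> needs a driver explanation
    "gross_margin": (0.42, 0.41),  # ~+2.4% -> below threshold, snapshot only
}
```

Each flagged metric then gets a "what changed / why / implication" entry; everything else stays in the KPI table.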
### Step 4: Separate data from commentary
Keep these layers distinct:
- reported numbers
- derived observations
- management interpretation
- recommendation or follow-up
### Step 5: Recommend output packaging
If the user needs a file artifact:
- use `xlsx` for workbook generation or structured tables
- use `docx` for formal reporting documents
- use `pdf` for final distribution or archival
## Output Format
```markdown
# [Financial Report Title]
## Executive Summary
[Short summary of performance, movement, and implications]
## Reporting Scope
- Period:
- Comparison basis:
- Currency / unit:
- Audience:
## KPI Snapshot
| Metric | Current | Comparison | Variance | Comment |
|--------|---------|------------|----------|---------|
## Key Drivers
### 1. [Driver]
- What changed:
- Why it changed:
- Business implication:
## Risks and Watch Items
- [Risk]
## Recommended Actions
1. [Action]
2. [Action]
## Data Notes
- Assumptions:
- Missing fields:
- Confidence / caveats:
```
## Quality Checklist
Before finalizing, verify:
- units and currency are explicit
- actuals, budget, and forecast are not mixed without labeling
- large variances are explained, not merely listed
- missing assumptions are disclosed
- conclusions are tied to metrics
- output is understandable for the stated audience
## Fallback Strategy
If the data is incomplete:
- provide a partial report with clear caveats
- mark where assumptions were required
- list the missing fields needed for a full report
## Related Skills
- `skills/xlsx/SKILL.md` - spreadsheet analysis, workbook generation, formula discipline
- `skills/docx/SKILL.md` - create formal management or board documents
- `skills/pdf/SKILL.md` - generate or process final PDF outputs
## Examples
**User**: "根据这份月度财务表,帮我做一份管理层月报"
Expected output:
- KPI summary
- major variances
- business interpretation
- risk notes
- action recommendations
**User**: "把这份季度经营数据整理成董事会能看的报告结构"
Expected output:
- executive summary
- KPI snapshot
- key drivers
- watch items
- recommended action framing


@@ -0,0 +1,194 @@
---
name: market-academic-insight
description: Generate structured market research and academic insight briefs with clear evidence, trends, risks, and opportunities. Use when the user asks for industry research, market trends, literature review, academic progress tracking, or evidence-based insight synthesis. 中文触发词包括:行业洞察、市场研究、学术综述、论文进展、趋势分析、研究简报。
---
# Market & Academic Insight
## Overview
This skill produces structured research briefs for two closely related scenarios:
1. **Market insight** - industry trends, company landscape, competitor movement, regional opportunity, policy impact
2. **Academic insight** - literature scan, research progress summary, topic synthesis, evidence comparison, research gaps
Use this skill when the user needs **research synthesis and judgment**, not just raw search results.
## Quick Start
When the user requests research or insight generation:
1. Identify whether the request is primarily **market**, **academic**, or **hybrid**
2. Clarify the topic, time range, geography, audience, and output depth
3. If live information is required, use existing search skills such as `baidu-search`
4. Synthesize findings into a structured brief with **evidence separated from conclusions**
### 中文任务映射
- “做一份行业洞察” → market
- “总结一下论文进展” → academic
- “分析这个技术的产业机会” → hybrid
- “整理成研究简报” → standard brief
## Input Requirements
Collect the following information before producing the final brief:
| Field | Required | Description |
|-------|----------|-------------|
| topic | yes | Research topic, industry, company, or academic theme |
| mode | yes | `market` / `academic` / `hybrid` |
| objective | yes | What decision or understanding this research should support |
| time range | no | Recent month, quarter, year, or custom range |
| geography | no | Country, region, or market scope |
| audience | no | Executive, product team, investor, researcher, student |
| output language | no | Language for the final brief |
| depth | no | Quick brief / standard report / deep dive |
If information is missing, ask only for the fields that materially change the output.
## Workflow Decision Tree
### Market Insight Workflow
Use this path when the user asks about:
- market trends
- industry landscape
- competitor movement
- customer demand shifts
- regulatory or policy effects
- opportunity and risk assessment
### Academic Insight Workflow
Use this path when the user asks about:
- literature review
- paper synthesis
- research frontier
- evidence comparison
- methodology trends
- open questions or research gaps
### Hybrid Workflow
Use this path when the user wants both:
- market adoption + academic progress
- commercial relevance of a research area
- industry impact of an emerging technology
## Instructions
### Step 1: Define scope
Restate the research target in a precise sentence:
- what is being studied
- why it matters
- what decision it should support
### Step 2: Gather evidence
Prefer recent, attributable sources. If live retrieval is needed, use `baidu-search` or other enabled search tools.
For each key source, capture:
- source name
- date
- relevant claim or data point
- confidence or limitation
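To keep captures consistent across sources, an evidence record can mirror these four bullets. The field names below are a sketch, not a required schema:

```python
from dataclasses import dataclass


@dataclass
class Evidence:
    source: str       # source name
    date: str         # publication date, ISO format preferred
    claim: str        # the relevant claim or data point
    confidence: str   # e.g. "high" / "medium" / "low"; note limitations in the claim itself


# Hypothetical entry for illustration only.
log = [
    Evidence("Example industry report", "2026-03-02", "Segment revenue grew ~20% YoY", "medium"),
]


def to_evidence_log(entries):
    """Render entries as '- [Source] - [Date] - [Key point]' lines for the brief."""
    return [f"- {e.source} - {e.date} - {e.claim}" for e in entries]
```

The rendered lines drop directly into the Evidence Log section of the output format below.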
### Step 3: Separate facts from interpretation
Always distinguish:
- **Evidence**: reported facts, data, quotes, findings
- **Analysis**: what those facts imply
- **Speculation**: what may happen next
Never present an assumption as a confirmed fact.
### Step 4: Synthesize by theme
Group findings into 3-6 themes such as:
- growth drivers
- demand shifts
- technology maturity
- methodological differences
- adoption barriers
- competitive positioning
### Step 5: Produce conclusions
End with concise insight statements that answer the user's objective, not just summarize materials.
## Output Format
Use the following structure by default:
```markdown
# [Title]
## Executive Summary
[3-6 sentence summary]
## Research Scope
- Topic:
- Mode:
- Time range:
- Geography:
- Audience:
## Key Findings
### 1. [Theme]
- Evidence:
- Interpretation:
- Implication:
### 2. [Theme]
- Evidence:
- Interpretation:
- Implication:
## Risks and Uncertainties
- [Risk / limitation]
## Opportunities or Next Steps
- [Actionable recommendation]
## Evidence Log
- [Source] - [Date] - [Key point]
```
## Quality Checklist
Before finalizing, verify:
- conclusions directly answer the user's goal
- evidence and judgment are clearly separated
- time range and geography are explicit when relevant
- contradictory evidence is acknowledged
- outdated or weak evidence is labeled as such
- no fabricated citations or unverified claims are included
## Fallback Strategy
If evidence is limited:
- say what is known
- say what remains uncertain
- suggest what additional sources or validation would improve confidence
Do not invent detail to make the brief look complete.
## Related Skills
- `skills/baidu-search/SKILL.md` - retrieve current external information
- `skills/auto-daily-summary/SKILL.md` - condense a large evidence set into a recurring summary
- `skills/competitor-news-intel/SKILL.md` - competitor-focused monitoring and intelligence
## Examples
**User**: "帮我做一份中国 AI Agent 市场趋势洞察"
Output should include:
- market scope and timeframe
- major players and movement
- demand signals
- risks and opportunities
- evidence log
**User**: "总结一下多模态检索近一年的学术进展"
Output should include:
- research scope
- major themes
- representative findings
- open research gaps
- evidence log


@@ -0,0 +1,202 @@
---
name: sales-decision-report
description: Analyze sales data and produce decision-oriented reports with KPI summaries, anomaly explanation, channel and region analysis, and HTML-ready report structure. Use when the user asks for sales analysis, management dashboards, sales summaries, or decision reports from business data. 中文触发词包括:销售分析、经营分析、销售周报、销售月报、数据决策报告、HTML 报表。
---
# Sales Decision Report
## Overview
This skill turns sales data into a decision report for operators, managers, and leadership teams.
It focuses on:
- KPI interpretation
- trend and anomaly analysis
- region / channel / product comparisons
- action-oriented reporting
- HTML-ready report structure for later automation
## Quick Start
When the user asks for a sales analysis report:
1. Confirm the business objective
2. Confirm the source data and dimensions
3. Clarify comparison logic and reporting period
4. Analyze the data into findings, not just tables
5. Package the result as a structured report, optionally in HTML outline form
### 中文任务映射
- “做销售周报/销售月报” → quick summary 或 standard report
- “分析一下为什么业绩下滑” → diagnostic analysis
- “生成一个 HTML 报表结构” → HTML report structure
- “做经营分析和行动建议” → decision report
## Input Requirements
| Field | Required | Description |
|-------|----------|-------------|
| business objective | yes | What decision the report should support |
| reporting period | yes | Daily, weekly, monthly, quarterly, custom |
| source data | yes | CSV, XLSX, pasted table, dashboard export |
| dimensions | no | Region, channel, product, team, customer segment |
| target / benchmark | no | Budget, target, last period, YoY, MoM |
| key KPIs | no | Revenue, orders, conversion, AOV, repeat rate, returns |
| audience | no | Sales ops, regional lead, GM, founder |
| output mode | no | Summary / detailed report / HTML-ready outline |
## Workflow Decision Tree
### Quick Sales Summary
Use for short KPI snapshots and headline findings.
### Diagnostic Analysis
Use when the user wants to know why performance moved.
### Decision Report
Use when the user wants recommendations, priorities, and next actions.
### HTML Report Structure
Use when the user explicitly wants an HTML report or a report that will later be automated into HTML.
## Instructions
### Step 1: Frame the question
Clarify whether the report is about:
- performance monitoring
- problem diagnosis
- opportunity finding
- action prioritization
### Step 2: Read the data by level
Review performance across:
- total performance
- time trend
- region
- channel
- product or category
- customer segment
Only include dimensions that materially affect the decision.
### Step 3: Identify meaningful changes
Look for:
- sharp increases or declines
- missed targets
- concentration risks
- outlier regions or channels
- mix shifts
- repeatable strengths
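A minimal sketch of this filter: flag sharp period-over-period swings and missed targets using simple thresholds. Both thresholds and the key names here are illustrative assumptions:

```python
def flag_changes(rows, swing=0.20, miss=0.05):
    """Flag rows with a sharp period-over-period swing or a missed target.

    rows: list of dicts with keys name, current, previous, target (target may be None).
    """
    findings = []
    for row in rows:
        if row["previous"]:
            change = (row["current"] - row["previous"]) / row["previous"]
            if abs(change) >= swing:
                findings.append((row["name"], "sharp_change", round(change, 4)))
        target = row.get("target")
        if target and row["current"] < target * (1 - miss):
            findings.append((row["name"], "missed_target", round(row["current"] / target, 4)))
    return findings


rows = [
    {"name": "north_region", "current": 70.0, "previous": 100.0, "target": 95.0},
    {"name": "online_channel", "current": 118.0, "previous": 110.0, "target": 100.0},
]
```

Each flag is only a candidate finding; Step 4 still decides whether it matters and what action follows.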
### Step 4: Turn findings into decisions
Each important finding should answer:
- what happened
- where it happened
- likely cause
- what action should follow
### Step 5: Prepare report packaging
If HTML is requested, structure content into sections suitable for cards, tables, and chart blocks.
## Output Format
### Standard Report
```markdown
# [Sales Report Title]
## Executive Summary
[Short performance summary]
## Reporting Scope
- Period:
- Objective:
- Audience:
- Data source:
## KPI Snapshot
| KPI | Current | Target / Comparison | Variance | Comment |
|-----|---------|---------------------|----------|---------|
## Key Findings
### 1. [Finding]
- What happened:
- Why it matters:
- Likely cause:
- Recommended action:
## Risks and Opportunities
- [Item]
## Recommended Actions
1. [Action]
2. [Action]
```
### HTML-ready Outline
```markdown
# HTML Report Structure
## Page Header
- Title
- Period selector
- Summary badges
## Section 1: KPI Cards
- Revenue
- Orders
- Conversion
- Average order value
## Section 2: Trend Analysis
- Time-series highlights
- Major inflection points
## Section 3: Breakdown Views
- Region table
- Channel table
- Product table
## Section 4: Actions
- Priority actions
- Owners or next-step suggestions
```
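When the outline later needs to be automated, each section can be rendered mechanically from structured findings. The sketch below renders the Section 1 KPI cards; the section id, class names, and card fields are assumptions, not a prescribed template:

```python
from html import escape


def render_kpi_cards(cards):
    """Render KPI name/value pairs as simple HTML card divs for Section 1."""
    items = [
        f'<div class="kpi-card"><h3>{escape(name)}</h3><p>{escape(str(value))}</p></div>'
        for name, value in cards
    ]
    return '<section id="kpi-cards">' + "".join(items) + "</section>"


page = render_kpi_cards([("Revenue", "1.2M"), ("Orders", 3400)])
```

Trend, breakdown, and action sections can follow the same pattern: one small renderer per outline section, fed by the analysis output.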
## Quality Checklist
Before finalizing, verify:
- the report supports a clear business decision
- metrics and benchmarks are labeled correctly
- anomalies are explained, not just surfaced
- recommendations follow logically from findings
- dimensions are not overloaded without purpose
- HTML output is structured, not just prose copied into sections
## Fallback Strategy
If data quality is weak:
- note missing or inconsistent fields
- avoid overconfident conclusions
- provide best-effort observations plus a data cleanup list
## Related Skills
- `skills/xlsx/SKILL.md` - spreadsheet reading, analysis, and output handling
- `skills/financial-report-generator/SKILL.md` - finance-oriented reporting when the task is more financial than sales-oriented
- `skills/auto-daily-summary/SKILL.md` - recurring condensed summary outputs
## Examples
**User**: "根据这份销售表做一个月度经营分析"
Expected output:
- KPI summary
- channel/region findings
- anomalies
- recommended actions
**User**: "帮我生成一个销售分析 HTML 报表结构"
Expected output:
- HTML-ready page outline
- sections for KPI, trends, breakdowns, and action items