Open-source platform for building enterprise-grade agents
A powerful and easy-to-use enterprise-grade agent platform
MaxKB (Max Knowledge Brain) is an open-source platform for building enterprise-grade agents. MaxKB integrates Retrieval-Augmented Generation (RAG) pipelines, supports robust workflows, and provides advanced MCP tool-use capabilities. It is widely applied in scenarios such as intelligent customer service, corporate internal knowledge bases, academic research, and education.
- RAG Pipeline: Supports direct document uploads and automatic crawling of online documents, with automatic text splitting and vectorization. This effectively reduces hallucinations in large language models and provides a superior smart Q&A experience (see the sketch after this list).
- Agentic Workflow: Equipped with a powerful workflow engine, function library and MCP tool-use, enabling the orchestration of AI processes to meet the needs of complex business scenarios.
- Seamless Integration: Enables rapid, zero-code integration into third-party business systems, quickly equipping existing systems with intelligent Q&A capabilities to improve user satisfaction.
- Model-Agnostic: Supports various large models, including private models (such as DeepSeek, Llama, Qwen, etc.) and public models (like OpenAI, Claude, Gemini, etc.).
- Multi-Modal: Native support for text, image, audio, and video input and output.
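To make the RAG Pipeline bullet concrete, here is a minimal, illustrative sketch of the retrieve-then-generate flow. It is not MaxKB's actual API: the `split`, `cosine`, and `answer` helpers are written for this example, and `embed` / `generate` are hypothetical stand-ins for an embedding model and an LLM call.

```python
# Minimal RAG sketch (illustrative only, not MaxKB's internal API).
# embed() and generate() are hypothetical stand-ins supplied by the caller.
from typing import Callable, List

def split(text: str, chunk_size: int = 500) -> List[str]:
    """Naive fixed-size splitting; real pipelines split on document structure."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def answer(question: str, documents: List[str],
           embed: Callable[[str], List[float]],
           generate: Callable[[str], str], top_k: int = 3) -> str:
    # 1. Split and vectorize the knowledge base.
    chunks = [c for doc in documents for c in split(doc)]
    index = [(c, embed(c)) for c in chunks]
    # 2. Retrieve the chunks most similar to the question.
    q_vec = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q_vec, item[1]), reverse=True)
    context = "\n\n".join(c for c, _ in ranked[:top_k])
    # 3. Ground the LLM answer in the retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```

Grounding the prompt in retrieved chunks is what reduces hallucinations: the model is asked to answer from the supplied context rather than from its parametric memory alone.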
Quick start
Run the command below to start a MaxKB container with Docker:
docker run -d --name=maxkb --restart=always -p 8080:8080 -v ~/.maxkb:/opt/maxkb 1panel/maxkb
Access the MaxKB web interface at http://your_server_ip:8080 with the default admin credentials:
- username: admin
- password: MaxKB@123..
Users in China who encounter Docker image pull failures should follow the offline installation guide (离线安装文档) instead.
Screenshots
Technical stack
- Frontend: Vue.js
- Backend: Python / Django
- LLM Framework: LangChain
- Database: PostgreSQL + pgvector
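The pgvector extension is what lets PostgreSQL serve as the vector store for the RAG pipeline. Below is a minimal sketch of a cosine-distance nearest-neighbor query; the `paragraph` table, its `embedding` column, and the connection string are assumptions for illustration, not MaxKB's actual schema.

```python
# Minimal pgvector similarity-search sketch (assumed schema, not MaxKB's
# actual table layout). Requires: pip install psycopg2-binary.
import psycopg2

query_embedding = [0.1, 0.2, 0.3]  # normally produced by an embedding model

conn = psycopg2.connect("dbname=maxkb user=postgres password=postgres host=localhost")
with conn, conn.cursor() as cur:
    # "<=>" is pgvector's cosine-distance operator; smaller means more similar.
    cur.execute(
        """
        SELECT id, content
        FROM paragraph                -- hypothetical table name
        ORDER BY embedding <=> %s::vector
        LIMIT 5
        """,
        (str(query_embedding),),
    )
    for row_id, content in cur.fetchall():
        print(row_id, content[:80])
```

Keeping embeddings in the same PostgreSQL instance as the rest of the data avoids running a separate vector database alongside the relational store.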
Star History
License
Licensed under The GNU General Public License version 3 (GPLv3) (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.gnu.org/licenses/gpl-3.0.html
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.