v3.4.1: Bark now supports Markdown; improved batch-size estimation accuracy across all channels

sansan 2025-11-29 19:59:33 +08:00
parent f7c424f499
commit 6266751bae
5 changed files with 314 additions and 106 deletions

View File

@@ -14,7 +14,7 @@
[![GitHub Stars](https://img.shields.io/github/stars/sansan0/TrendRadar?style=flat-square&logo=github&color=yellow)](https://github.com/sansan0/TrendRadar/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/sansan0/TrendRadar?style=flat-square&logo=github&color=blue)](https://github.com/sansan0/TrendRadar/network/members)
[![License](https://img.shields.io/badge/license-GPL--3.0-blue.svg?style=flat-square)](LICENSE)
-[![Version](https://img.shields.io/badge/version-v3.4.0-blue.svg)](https://github.com/sansan0/TrendRadar)
+[![Version](https://img.shields.io/badge/version-v3.4.1-blue.svg)](https://github.com/sansan0/TrendRadar)
[![MCP](https://img.shields.io/badge/MCP-v1.0.3-green.svg)](https://github.com/sansan0/TrendRadar)
[![WeWork](https://img.shields.io/badge/WeWork-Notification-00D4AA?style=flat-square)](https://work.weixin.qq.com/)
@@ -272,6 +272,40 @@ Transform from "algorithm recommendation captivity" to "actively getting the inf
- **Major Version Upgrade**: Upgrading from v1.x to v2.y, recommend deleting existing fork and re-forking to save effort and avoid config conflicts
### 2025/11/28 - v3.4.1

**🔧 Format Optimization**

1. **Bark Push Enhancement**
   - Bark now supports Markdown rendering
   - Enabled native Markdown formatting: bold, links, lists, code blocks, etc.
   - Removed the plain-text conversion to fully utilize Bark's native rendering capabilities
2. **Slack Format Precision**
   - Batch content is now processed with Slack's dedicated mrkdwn format
   - Improved byte-size estimation accuracy (avoids message overflow)
   - Optimized link format `<url|text>` and bold syntax `*text*`
3. **Performance Improvement**
   - Format conversion is completed during batching, avoiding a second processing pass
   - Accurate message-size estimation reduces the send-failure rate

**🔧 Upgrade Instructions**:

- **GitHub Fork Users**: Update `main.py` and `config.yaml`
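The "byte-size estimation accuracy" item above comes down to measuring UTF-8 bytes rather than characters; a minimal illustration (not TrendRadar code — `utf8_size` is just a throwaway name):

```python
# Messaging APIs limit bytes, not characters. CJK characters occupy
# 3 bytes each in UTF-8, so len() badly underestimates payload size.
def utf8_size(text: str) -> int:
    return len(text.encode("utf-8"))

cjk = "热点分析报告"
print(len(cjk), utf8_size(cjk))  # 6 characters, but 18 bytes on the wire
```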
### 2025/11/26 - mcp-v1.0.3

**MCP Module Update:**

- Added the date-parsing tool `resolve_date_range` to resolve AI-model date-calculation inconsistencies
- Supports natural-language date expressions (this week, last 7 days, last month, etc.)
- Tool count increased from 13 to 14
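The actual `resolve_date_range` implementation lives in the MCP module and is not part of this commit; the sketch below only illustrates the kind of natural-language-to-range mapping described above (the expressions handled and the helper name are illustrative assumptions):

```python
from datetime import date, timedelta

def resolve_range(expr: str, today: date) -> tuple:
    # Map a few natural-language expressions to an inclusive (start, end) range.
    if expr == "last 7 days":
        return today - timedelta(days=6), today
    if expr == "this week":  # ISO convention: week starts on Monday
        return today - timedelta(days=today.weekday()), today
    if expr == "last month":
        last_prev = today.replace(day=1) - timedelta(days=1)  # last day of previous month
        return last_prev.replace(day=1), last_prev
    raise ValueError(f"unsupported expression: {expr}")

print(resolve_range("last month", date(2025, 11, 28)))
# (datetime.date(2025, 10, 1), datetime.date(2025, 10, 31))
```

Pinning the resolution to a concrete `today` is exactly what avoids the date-calculation inconsistencies the tool targets.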
<details>
<summary>👉 Click to expand: <strong>Historical Updates</strong></summary>
### 2025/11/25 - v3.4.0
**🎉 Added Slack Push Support**
@@ -294,17 +328,6 @@ Transform from "algorithm recommendation captivity" to "actively getting the inf
**🔧 Upgrade Instructions**:
- **GitHub Fork Users**: Update `main.py`, `config/config.yaml`, `.github/workflows/crawler.yml`
### 2025/11/26 - mcp-v1.0.3
**MCP Module Update:**
- Added date parsing tool resolve_date_range to resolve AI model date calculation inconsistencies
- Support natural language date expression parsing (this week, last 7 days, last month, etc.)
- Tool count increased from 13 to 14
<details>
<summary>👉 Click to expand: <strong>Historical Updates</strong></summary>
### 2025/11/24 - v3.3.0
@@ -1845,7 +1868,7 @@ current directory/
**Usage Method**:
- Modify `.env` file, uncomment and fill in needed configs
- Or add directly in NAS/Synology Docker management interface's "Environment Variables"
-- Restart container to take effect: `docker-compose restart`
+- Restart container to take effect: `docker-compose up -d`
3. **Start Service**:

View File

@@ -14,7 +14,7 @@
[![GitHub Stars](https://img.shields.io/github/stars/sansan0/TrendRadar?style=flat-square&logo=github&color=yellow)](https://github.com/sansan0/TrendRadar/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/sansan0/TrendRadar?style=flat-square&logo=github&color=blue)](https://github.com/sansan0/TrendRadar/network/members)
[![License](https://img.shields.io/badge/license-GPL--3.0-blue.svg?style=flat-square)](LICENSE)
-[![Version](https://img.shields.io/badge/version-v3.4.0-blue.svg)](https://github.com/sansan0/TrendRadar)
+[![Version](https://img.shields.io/badge/version-v3.4.1-blue.svg)](https://github.com/sansan0/TrendRadar)
[![MCP](https://img.shields.io/badge/MCP-v1.0.3-green.svg)](https://github.com/sansan0/TrendRadar)
[![企业微信通知](https://img.shields.io/badge/企业微信-通知-00D4AA?style=flat-square)](https://work.weixin.qq.com/)
@@ -344,6 +344,32 @@ GitHub 一键 Fork 即可使用,无需编程基础。
- 支持自然语言日期表达式解析(本周、最近7天、上月等)
- 工具总数从 13 个增加到 14 个
### 2025/11/28 - v3.4.1

**🔧 格式优化**

1. **Bark 推送增强**
   - Bark 现支持 Markdown 渲染
   - 启用原生 Markdown 格式:粗体、链接、列表、代码块等
   - 移除纯文本转换,充分利用 Bark 原生渲染能力
2. **Slack 格式精准化**
   - 使用专用 mrkdwn 格式处理分批内容
   - 提升字节大小估算准确性(避免消息超限)
   - 优化链接格式:`<url|text>` 和加粗语法:`*text*`
3. **性能提升**
   - 格式转换在分批过程中完成,避免二次处理
   - 准确估算消息大小,减少发送失败率

**🔧 升级说明**

- **GitHub Fork 用户**:更新 `main.py`、`config.yaml`
<details>
<summary>👉 点击展开:<strong>历史更新</strong></summary>
### 2025/11/25 - v3.4.0
**🎉 新增 Slack 推送支持**
@@ -367,10 +393,6 @@ GitHub 一键 Fork 即可使用,无需编程基础。
- **GitHub Fork 用户**:更新 `main.py`、`config/config.yaml`、`.github/workflows/crawler.yml`
<details>
<summary>👉 点击展开:<strong>历史更新</strong></summary>
### 2025/11/24 - v3.3.0
**🎉 新增 Bark 推送支持**
@@ -1893,7 +1915,7 @@ docker run -d --name trend-radar \
**使用方法**
- 修改 `.env` 文件,取消注释并填写需要的配置
- 或在 NAS/群晖 Docker 管理界面的"环境变量"中直接添加
-- 重启容器后生效:`docker-compose restart`
+- 重启容器后生效:`docker-compose up -d`
3. **启动服务**:

View File

@@ -34,8 +34,8 @@ notification:
  enable_notification: true # 是否启用通知功能,如果 false 则不发送手机通知
  message_batch_size: 4000 # 消息分批大小(字节)(这个配置别动)
  dingtalk_batch_size: 20000 # 钉钉消息分批大小(字节)(这个配置也别动)
-  feishu_batch_size: 29000 # 飞书消息分批大小(字节)
-  bark_batch_size: 3600 # Bark消息分批大小(字节)
+  feishu_batch_size: 30000 # 飞书消息分批大小(字节)
+  bark_batch_size: 4000 # Bark消息分批大小(字节)
  slack_batch_size: 4000 # Slack消息分批大小(字节)
  batch_send_interval: 3 # 批次发送间隔(秒)
  feishu_message_separator: "━━━━━━━━━━━━━━━━━━━" # feishu 消息分割线

main.py (325 changed lines)
View File

@@ -20,7 +20,7 @@ import requests
import yaml

-VERSION = "3.4.0"
+VERSION = "3.4.1"
# === SMTP邮件配置 ===
@@ -1071,6 +1071,9 @@ def format_rank_display(ranks: List[int], rank_threshold: int, format_type: str)
    elif format_type == "telegram":
        highlight_start = "<b>"
        highlight_end = "</b>"
+    elif format_type == "slack":
+        highlight_start = "*"
+        highlight_end = "*"
    else:
        highlight_start = "**"
        highlight_end = "**"
@@ -1576,7 +1579,8 @@ def format_title_for_platform(
        return result

-    elif platform == "wework":
+    elif platform in ("wework", "bark"):
+        # WeWork 和 Bark 使用 markdown 格式
        if link_url:
            formatted_title = f"[{cleaned_title}]({link_url})"
        else:
@@ -1642,6 +1646,34 @@ def format_title_for_platform(
        return result

+    elif platform == "slack":
+        # Slack 使用 mrkdwn 格式
+        if link_url:
+            # Slack 链接格式: <url|text>
+            formatted_title = f"<{link_url}|{cleaned_title}>"
+        else:
+            formatted_title = cleaned_title
+
+        title_prefix = "🆕 " if title_data.get("is_new") else ""
+
+        if show_source:
+            result = f"[{title_data['source_name']}] {title_prefix}{formatted_title}"
+        else:
+            result = f"{title_prefix}{formatted_title}"
+
+        # 排名(使用 * 加粗)
+        rank_display = format_rank_display(
+            title_data["ranks"], title_data["rank_threshold"], "slack"
+        )
+        if rank_display:
+            result += f" {rank_display}"
+
+        if title_data["time_display"]:
+            result += f" `- {title_data['time_display']}`"
+
+        if title_data["count"] > 1:
+            result += f" `({title_data['count']}次)`"
+
+        return result
+
    elif platform == "html":
        rank_display = format_rank_display(
            title_data["ranks"], title_data["rank_threshold"], "html"
@@ -2906,6 +2938,90 @@ def render_dingtalk_content(
    return text_content

+
+def _get_batch_header(format_type: str, batch_num: int, total_batches: int) -> str:
+    """根据 format_type 生成对应格式的批次头部"""
+    if format_type == "telegram":
+        return f"<b>[第 {batch_num}/{total_batches} 批次]</b>\n\n"
+    elif format_type == "slack":
+        return f"*[第 {batch_num}/{total_batches} 批次]*\n\n"
+    elif format_type in ("wework_text", "bark"):
+        # 企业微信文本模式和 Bark 使用纯文本格式
+        return f"[第 {batch_num}/{total_batches} 批次]\n\n"
+    else:
+        # 飞书、钉钉、ntfy、企业微信 markdown 模式
+        return f"**[第 {batch_num}/{total_batches} 批次]**\n\n"
+
+
+def _get_max_batch_header_size(format_type: str) -> int:
+    """估算批次头部的最大字节数(假设最多 99 批次),
+    用于在分批时预留空间,避免事后截断破坏内容完整性
+    """
+    # 生成最坏情况的头部(99/99 批次)
+    max_header = _get_batch_header(format_type, 99, 99)
+    return len(max_header.encode("utf-8"))
+
+
+def _truncate_to_bytes(text: str, max_bytes: int) -> str:
+    """安全截断字符串到指定字节数,避免截断多字节字符"""
+    text_bytes = text.encode("utf-8")
+    if len(text_bytes) <= max_bytes:
+        return text
+
+    # 截断到指定字节数
+    truncated = text_bytes[:max_bytes]
+
+    # 处理可能的不完整 UTF-8 字符
+    for i in range(min(4, len(truncated))):
+        try:
+            return truncated[: len(truncated) - i].decode("utf-8")
+        except UnicodeDecodeError:
+            continue
+
+    # 极端情况:返回空字符串
+    return ""
+
+
+def add_batch_headers(
+    batches: List[str], format_type: str, max_bytes: int
+) -> List[str]:
+    """为批次添加头部,动态计算确保总大小不超过限制
+
+    Args:
+        batches: 原始批次列表
+        format_type: 推送类型(bark、telegram、feishu 等)
+        max_bytes: 该推送类型的最大字节限制
+
+    Returns:
+        添加头部后的批次列表
+    """
+    if len(batches) <= 1:
+        return batches
+
+    total = len(batches)
+    result = []
+    for i, content in enumerate(batches, 1):
+        # 生成批次头部
+        header = _get_batch_header(format_type, i, total)
+        header_size = len(header.encode("utf-8"))
+
+        # 动态计算允许的最大内容大小
+        max_content_size = max_bytes - header_size
+        content_size = len(content.encode("utf-8"))
+
+        # 如果超出,截断到安全大小
+        if content_size > max_content_size:
+            print(
+                f"警告:{format_type} 第 {i}/{total} 批次内容({content_size} 字节) + 头部({header_size} 字节) 超出限制({max_bytes} 字节),截断到 {max_content_size} 字节"
+            )
+            content = _truncate_to_bytes(content, max_content_size)
+
+        result.append(header + content)
+
+    return result
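Putting the helpers above together: each sender first shrinks `max_bytes` by the worst-case header size, then the header-adding pass guarantees that header plus content never exceeds the platform limit. A condensed, self-contained re-implementation (simplified from this diff, not the project code verbatim) demonstrates that invariant:

```python
def truncate_to_bytes(text: str, max_bytes: int) -> str:
    # Cut at the byte limit, then back up past any split multi-byte character.
    data = text.encode("utf-8")[:max_bytes]
    for i in range(min(4, len(data) + 1)):
        try:
            return data[: len(data) - i].decode("utf-8")
        except UnicodeDecodeError:
            continue
    return ""

def add_headers(batches, limit):
    if len(batches) <= 1:  # a single batch needs no "[第 x/y 批次]" header
        return batches
    out = []
    for n, content in enumerate(batches, 1):
        header = f"[第 {n}/{len(batches)} 批次]\n\n"
        room = limit - len(header.encode("utf-8"))  # reserve header bytes
        out.append(header + truncate_to_bytes(content, room))
    return out

batches = add_headers(["热" * 100, "点" * 100], limit=120)
assert all(len(b.encode("utf-8")) <= 120 for b in batches)
```

The back-off loop matters because a 3-byte CJK character cut in half would otherwise raise `UnicodeDecodeError` or corrupt the message tail.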
def split_content_into_batches(
    report_data: Dict,
    format_type: str,
@@ -2932,7 +3048,7 @@ def split_content_into_batches(
    now = get_beijing_time()

    base_header = ""
-    if format_type == "wework":
+    if format_type in ("wework", "bark"):
        base_header = f"**总新闻数:** {total_titles}\n\n\n\n"
    elif format_type == "telegram":
        base_header = f"总新闻数: {total_titles}\n\n"
@@ -2945,9 +3061,11 @@ def split_content_into_batches(
        base_header += f"**时间:** {now.strftime('%Y-%m-%d %H:%M:%S')}\n\n"
        base_header += f"**类型:** 热点分析报告\n\n"
        base_header += "---\n\n"
+    elif format_type == "slack":
+        base_header = f"*总新闻数:* {total_titles}\n\n"

    base_footer = ""
-    if format_type == "wework":
+    if format_type in ("wework", "bark"):
        base_footer = f"\n\n\n> 更新时间:{now.strftime('%Y-%m-%d %H:%M:%S')}"
        if update_info:
            base_footer += f"\n> TrendRadar 发现新版本 **{update_info['remote_version']}**,当前 **{update_info['current_version']}**"
@@ -2967,10 +3085,14 @@ def split_content_into_batches(
        base_footer = f"\n\n> 更新时间:{now.strftime('%Y-%m-%d %H:%M:%S')}"
        if update_info:
            base_footer += f"\n> TrendRadar 发现新版本 **{update_info['remote_version']}**,当前 **{update_info['current_version']}**"
+    elif format_type == "slack":
+        base_footer = f"\n\n_更新时间:{now.strftime('%Y-%m-%d %H:%M:%S')}_"
+        if update_info:
+            base_footer += f"\n_TrendRadar 发现新版本 *{update_info['remote_version']}*,当前 *{update_info['current_version']}*_"

    stats_header = ""
    if report_data["stats"]:
-        if format_type == "wework":
+        if format_type in ("wework", "bark"):
            stats_header = f"📊 **热点词汇统计**\n\n"
        elif format_type == "telegram":
            stats_header = f"📊 热点词汇统计\n\n"
@@ -2980,6 +3102,8 @@ def split_content_into_batches(
            stats_header = f"📊 **热点词汇统计**\n\n"
        elif format_type == "dingtalk":
            stats_header = f"📊 **热点词汇统计**\n\n"
+        elif format_type == "slack":
+            stats_header = f"📊 *热点词汇统计*\n\n"

    current_batch = base_header
    current_batch_has_content = False
@@ -3026,7 +3150,7 @@ def split_content_into_batches(
        # 构建词组标题
        word_header = ""
-        if format_type == "wework":
+        if format_type in ("wework", "bark"):
            if count >= 10:
                word_header = (
                    f"🔥 {sequence_display} **{word}** : **{count}** 条\n\n"
@@ -3073,12 +3197,23 @@ def split_content_into_batches(
                )
            else:
                word_header = f"📌 {sequence_display} **{word}** : {count}\n\n"
+        elif format_type == "slack":
+            if count >= 10:
+                word_header = (
+                    f"🔥 {sequence_display} *{word}* : *{count}* 条\n\n"
+                )
+            elif count >= 5:
+                word_header = (
+                    f"📈 {sequence_display} *{word}* : *{count}* 条\n\n"
+                )
+            else:
+                word_header = f"📌 {sequence_display} *{word}* : {count}\n\n"

        # 构建第一条新闻
        first_news_line = ""
        if stat["titles"]:
            first_title_data = stat["titles"][0]
-            if format_type == "wework":
+            if format_type in ("wework", "bark"):
                formatted_title = format_title_for_platform(
                    "wework", first_title_data, show_source=True
                )
@@ -3098,6 +3233,10 @@ def split_content_into_batches(
                formatted_title = format_title_for_platform(
                    "dingtalk", first_title_data, show_source=True
                )
+            elif format_type == "slack":
+                formatted_title = format_title_for_platform(
+                    "slack", first_title_data, show_source=True
+                )
            else:
                formatted_title = f"{first_title_data['title']}"
@@ -3127,7 +3266,7 @@ def split_content_into_batches(
        # 处理剩余新闻条目
        for j in range(start_index, len(stat["titles"])):
            title_data = stat["titles"][j]
-            if format_type == "wework":
+            if format_type in ("wework", "bark"):
                formatted_title = format_title_for_platform(
                    "wework", title_data, show_source=True
                )
@@ -3147,6 +3286,10 @@ def split_content_into_batches(
                formatted_title = format_title_for_platform(
                    "dingtalk", title_data, show_source=True
                )
+            elif format_type == "slack":
+                formatted_title = format_title_for_platform(
+                    "slack", title_data, show_source=True
+                )
            else:
                formatted_title = f"{title_data['title']}"
@@ -3170,7 +3313,7 @@ def split_content_into_batches(
        # 词组间分隔符
        if i < len(report_data["stats"]) - 1:
            separator = ""
-            if format_type == "wework":
+            if format_type in ("wework", "bark"):
                separator = f"\n\n\n\n"
            elif format_type == "telegram":
                separator = f"\n\n"
@@ -3180,6 +3323,8 @@ def split_content_into_batches(
                separator = f"\n{CONFIG['FEISHU_MESSAGE_SEPARATOR']}\n\n"
            elif format_type == "dingtalk":
                separator = f"\n---\n\n"
+            elif format_type == "slack":
+                separator = f"\n\n"

            test_content = current_batch + separator
            if (
@@ -3191,7 +3336,7 @@ def split_content_into_batches(
    # 处理新增新闻(同样确保来源标题+第一条新闻的原子性)
    if report_data["new_titles"]:
        new_header = ""
-        if format_type == "wework":
+        if format_type in ("wework", "bark"):
            new_header = f"\n\n\n\n🆕 **本次新增热点新闻** (共 {report_data['total_new_count']} 条)\n\n"
        elif format_type == "telegram":
            new_header = (
@@ -3203,6 +3348,8 @@ def split_content_into_batches(
            new_header = f"\n{CONFIG['FEISHU_MESSAGE_SEPARATOR']}\n\n🆕 **本次新增热点新闻** (共 {report_data['total_new_count']} 条)\n\n"
        elif format_type == "dingtalk":
            new_header = f"\n---\n\n🆕 **本次新增热点新闻** (共 {report_data['total_new_count']} 条)\n\n"
+        elif format_type == "slack":
+            new_header = f"\n\n🆕 *本次新增热点新闻* (共 {report_data['total_new_count']} 条)\n\n"

        test_content = current_batch + new_header
        if (
@@ -3220,7 +3367,7 @@ def split_content_into_batches(
        # 逐个处理新增新闻来源
        for source_data in report_data["new_titles"]:
            source_header = ""
-            if format_type == "wework":
+            if format_type in ("wework", "bark"):
                source_header = f"**{source_data['source_name']}** ({len(source_data['titles'])} 条):\n\n"
            elif format_type == "telegram":
                source_header = f"{source_data['source_name']} ({len(source_data['titles'])} 条):\n\n"
@@ -3230,6 +3377,8 @@ def split_content_into_batches(
                source_header = f"**{source_data['source_name']}** ({len(source_data['titles'])} 条):\n\n"
            elif format_type == "dingtalk":
                source_header = f"**{source_data['source_name']}** ({len(source_data['titles'])} 条):\n\n"
+            elif format_type == "slack":
+                source_header = f"*{source_data['source_name']}* ({len(source_data['titles'])} 条):\n\n"

            # 构建第一条新增新闻
            first_news_line = ""
@@ -3238,7 +3387,7 @@ def split_content_into_batches(
                title_data_copy = first_title_data.copy()
                title_data_copy["is_new"] = False
-                if format_type == "wework":
+                if format_type in ("wework", "bark"):
                    formatted_title = format_title_for_platform(
                        "wework", title_data_copy, show_source=False
                    )
@@ -3254,6 +3403,10 @@ def split_content_into_batches(
                    formatted_title = format_title_for_platform(
                        "dingtalk", title_data_copy, show_source=False
                    )
+                elif format_type == "slack":
+                    formatted_title = format_title_for_platform(
+                        "slack", title_data_copy, show_source=False
+                    )
                else:
                    formatted_title = f"{title_data_copy['title']}"
@@ -3299,6 +3452,10 @@ def split_content_into_batches(
                    formatted_title = format_title_for_platform(
                        "dingtalk", title_data_copy, show_source=False
                    )
+                elif format_type == "slack":
+                    formatted_title = format_title_for_platform(
+                        "slack", title_data_copy, show_source=False
+                    )
                else:
                    formatted_title = f"{title_data_copy['title']}"
@@ -3533,14 +3690,20 @@ def send_to_feishu(
        proxies = {"http": proxy_url, "https": proxy_url}

    # 获取分批内容,使用飞书专用的批次大小
+    feishu_batch_size = CONFIG.get("FEISHU_BATCH_SIZE", 29000)
+    # 预留批次头部空间,避免添加头部后超限
+    header_reserve = _get_max_batch_header_size("feishu")
    batches = split_content_into_batches(
        report_data,
        "feishu",
        update_info,
-        max_bytes=CONFIG.get("FEISHU_BATCH_SIZE", 29000),
+        max_bytes=feishu_batch_size - header_reserve,
        mode=mode,
    )
+    # 统一添加批次头部(已预留空间,不会超限)
+    batches = add_batch_headers(batches, "feishu", feishu_batch_size)

    print(f"飞书消息分为 {len(batches)} 批次发送 [{report_type}]")

    # 逐批发送
@@ -3550,18 +3713,6 @@ def send_to_feishu(
            f"发送飞书第 {i}/{len(batches)} 批次,大小:{batch_size} 字节 [{report_type}]"
        )

-        # 添加批次标识
-        if len(batches) > 1:
-            batch_header = f"**[第 {i}/{len(batches)} 批次]**\n\n"
-            # 将批次标识插入到适当位置(在统计标题之后)
-            if "📊 **热点词汇统计**" in batch_content:
-                batch_content = batch_content.replace(
-                    "📊 **热点词汇统计**\n\n", f"📊 **热点词汇统计** {batch_header}"
-                )
-            else:
-                # 如果没有统计标题,直接在开头添加
-                batch_content = batch_header + batch_content

    total_titles = sum(
        len(stat["titles"]) for stat in report_data["stats"] if stat["count"] > 0
    )
@@ -3623,14 +3774,20 @@ def send_to_dingtalk(
        proxies = {"http": proxy_url, "https": proxy_url}

    # 获取分批内容,使用钉钉专用的批次大小
+    dingtalk_batch_size = CONFIG.get("DINGTALK_BATCH_SIZE", 20000)
+    # 预留批次头部空间,避免添加头部后超限
+    header_reserve = _get_max_batch_header_size("dingtalk")
    batches = split_content_into_batches(
        report_data,
        "dingtalk",
        update_info,
-        max_bytes=CONFIG.get("DINGTALK_BATCH_SIZE", 20000),
+        max_bytes=dingtalk_batch_size - header_reserve,
        mode=mode,
    )
+    # 统一添加批次头部(已预留空间,不会超限)
+    batches = add_batch_headers(batches, "dingtalk", dingtalk_batch_size)

    print(f"钉钉消息分为 {len(batches)} 批次发送 [{report_type}]")

    # 逐批发送
@@ -3640,18 +3797,6 @@ def send_to_dingtalk(
            f"发送钉钉第 {i}/{len(batches)} 批次,大小:{batch_size} 字节 [{report_type}]"
        )

-        # 添加批次标识
-        if len(batches) > 1:
-            batch_header = f"**[第 {i}/{len(batches)} 批次]**\n\n"
-            # 将批次标识插入到适当位置(在标题之后)
-            if "📊 **热点词汇统计**" in batch_content:
-                batch_content = batch_content.replace(
-                    "📊 **热点词汇统计**\n\n", f"📊 **热点词汇统计** {batch_header}\n\n"
-                )
-            else:
-                # 如果没有统计标题,直接在开头添加
-                batch_content = batch_header + batch_content

        payload = {
            "msgtype": "markdown",
            "markdown": {
@@ -3756,21 +3901,23 @@ def send_to_wework(
    else:
        print(f"企业微信使用 markdown 格式(群机器人模式)[{report_type}]")

-    # 获取分批内容
-    batches = split_content_into_batches(report_data, "wework", update_info, mode=mode)
+    # text 模式使用 wework_text,markdown 模式使用 wework
+    header_format_type = "wework_text" if is_text_mode else "wework"
+
+    # 获取分批内容,预留批次头部空间
+    wework_batch_size = CONFIG.get("MESSAGE_BATCH_SIZE", 4000)
+    header_reserve = _get_max_batch_header_size(header_format_type)
+    batches = split_content_into_batches(
+        report_data, "wework", update_info, max_bytes=wework_batch_size - header_reserve, mode=mode
+    )
+    # 统一添加批次头部(已预留空间,不会超限)
+    batches = add_batch_headers(batches, header_format_type, wework_batch_size)

    print(f"企业微信消息分为 {len(batches)} 批次发送 [{report_type}]")

    # 逐批发送
    for i, batch_content in enumerate(batches, 1):
-        # 添加批次标识
-        if len(batches) > 1:
-            if is_text_mode:
-                batch_header = f"[第 {i}/{len(batches)} 批次]\n\n"
-            else:
-                batch_header = f"**[第 {i}/{len(batches)} 批次]**\n\n"
-            batch_content = batch_header + batch_content

        # 根据消息类型构建 payload
        if is_text_mode:
            # text 格式:去除 markdown 语法
@@ -3832,11 +3979,16 @@ def send_to_telegram(
    if proxy_url:
        proxies = {"http": proxy_url, "https": proxy_url}

-    # 获取分批内容
+    # 获取分批内容,预留批次头部空间
+    telegram_batch_size = CONFIG.get("MESSAGE_BATCH_SIZE", 4000)
+    header_reserve = _get_max_batch_header_size("telegram")
    batches = split_content_into_batches(
-        report_data, "telegram", update_info, mode=mode
+        report_data, "telegram", update_info, max_bytes=telegram_batch_size - header_reserve, mode=mode
    )
+    # 统一添加批次头部(已预留空间,不会超限)
+    batches = add_batch_headers(batches, "telegram", telegram_batch_size)

    print(f"Telegram消息分为 {len(batches)} 批次发送 [{report_type}]")

    # 逐批发送
@@ -3846,11 +3998,6 @@ def send_to_telegram(
            f"发送Telegram第 {i}/{len(batches)} 批次,大小:{batch_size} 字节 [{report_type}]"
        )

-        # 添加批次标识
-        if len(batches) > 1:
-            batch_header = f"<b>[第 {i}/{len(batches)} 批次]</b>\n\n"
-            batch_content = batch_header + batch_content

        payload = {
            "chat_id": chat_id,
            "text": batch_content,
@@ -4069,11 +4216,16 @@ def send_to_ntfy(
    if proxy_url:
        proxies = {"http": proxy_url, "https": proxy_url}

-    # 获取分批内容,使用ntfy专用的4KB限制
+    # 获取分批内容,使用ntfy专用的4KB限制,预留批次头部空间
+    ntfy_batch_size = 3800
+    header_reserve = _get_max_batch_header_size("ntfy")
    batches = split_content_into_batches(
-        report_data, "ntfy", update_info, max_bytes=3800, mode=mode
+        report_data, "ntfy", update_info, max_bytes=ntfy_batch_size - header_reserve, mode=mode
    )
+    # 统一添加批次头部(已预留空间,不会超限)
+    batches = add_batch_headers(batches, "ntfy", ntfy_batch_size)

    total_batches = len(batches)
    print(f"ntfy消息分为 {total_batches} 批次发送 [{report_type}]")
@@ -4098,11 +4250,9 @@ def send_to_ntfy(
        if batch_size > 4096:
            print(f"警告:ntfy第 {actual_batch_num} 批次消息过大({batch_size} 字节),可能被拒绝")

-        # 添加批次标识(使用正确的批次编号)
+        # 更新 headers 的批次标识
        current_headers = headers.copy()
        if total_batches > 1:
-            batch_header = f"**[第 {actual_batch_num}/{total_batches} 批次]**\n\n"
-            batch_content = batch_header + batch_content
            current_headers["Title"] = (
                f"{report_type_en} ({actual_batch_num}/{total_batches})"
            )
@@ -4185,16 +4335,35 @@ def send_to_bark(
    proxy_url: Optional[str] = None,
    mode: str = "daily",
) -> bool:
-    """发送到Bark(支持分批发送,使用纯文本格式)"""
+    """发送到Bark(支持分批发送,使用 markdown 格式)"""
    proxies = None
    if proxy_url:
        proxies = {"http": proxy_url, "https": proxy_url}

-    # 获取分批内容(Bark 限制为 3600 字节以避免 413 错误)
+    # 解析 Bark URL,提取 device_key 和 API 端点
+    # Bark URL 格式: https://api.day.app/device_key 或 https://bark.day.app/device_key
+    from urllib.parse import urlparse
+
+    parsed_url = urlparse(bark_url)
+    device_key = parsed_url.path.strip('/').split('/')[0] if parsed_url.path else None
+    if not device_key:
+        print(f"Bark URL 格式错误,无法提取 device_key: {bark_url}")
+        return False
+
+    # 构建正确的 API 端点
+    api_endpoint = f"{parsed_url.scheme}://{parsed_url.netloc}/push"
+
+    # 获取分批内容(Bark 限制为 3600 字节以避免 413 错误),预留批次头部空间
+    bark_batch_size = CONFIG["BARK_BATCH_SIZE"]
+    header_reserve = _get_max_batch_header_size("bark")
    batches = split_content_into_batches(
-        report_data, "wework", update_info, max_bytes=CONFIG["BARK_BATCH_SIZE"], mode=mode
+        report_data, "bark", update_info, max_bytes=bark_batch_size - header_reserve, mode=mode
    )
+    # 统一添加批次头部(已预留空间,不会超限)
+    batches = add_batch_headers(batches, "bark", bark_batch_size)

    total_batches = len(batches)
    print(f"Bark消息分为 {total_batches} 批次发送 [{report_type}]")
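The device_key extraction added above is plain `urlparse` work and can be checked in isolation; this sketch mirrors the same logic (the key below is a made-up placeholder):

```python
from urllib.parse import urlparse

def bark_endpoint(bark_url: str):
    # https://api.day.app/<device_key> -> ("https://api.day.app/push", device_key)
    parsed = urlparse(bark_url)
    device_key = parsed.path.strip("/").split("/")[0] if parsed.path else None
    if not device_key:
        return None  # malformed URL: no device_key path segment
    return f"{parsed.scheme}://{parsed.netloc}/push", device_key

print(bark_endpoint("https://api.day.app/AbCdEf123/"))
# ('https://api.day.app/push', 'AbCdEf123')
```

Posting to `/push` with `device_key` in the JSON body is what lets the payload carry a `markdown` field, instead of packing the message into the URL path.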
@@ -4210,15 +4379,7 @@ def send_to_bark(
        # 计算正确的批次编号(用户视角的编号)
        actual_batch_num = total_batches - idx + 1

-        # 添加批次标识(使用正确的批次编号)
-        if total_batches > 1:
-            batch_header = f"[第 {actual_batch_num}/{total_batches} 批次]\n\n"
-            batch_content = batch_header + batch_content
-
-        # 清理 markdown 语法(Bark 不支持 markdown)
-        plain_content = strip_markdown(batch_content)
-        batch_size = len(plain_content.encode("utf-8"))
+        batch_size = len(batch_content.encode("utf-8"))
        print(
            f"发送Bark第 {actual_batch_num}/{total_batches} 批次(推送顺序: {idx}/{total_batches}),大小:{batch_size} 字节 [{report_type}]"
        )
@@ -4232,14 +4393,16 @@ def send_to_bark(
        # 构建JSON payload
        payload = {
            "title": report_type,
-            "body": plain_content,
+            "markdown": batch_content,
+            "device_key": device_key,
            "sound": "default",
            "group": "TrendRadar",
            "action": "none",  # 点击推送跳到 APP 不弹出弹框,方便阅读
        }

        try:
            response = requests.post(
-                bark_url,
+                api_endpoint,
                json=payload,
                proxies=proxies,
                timeout=30,
@@ -4319,20 +4482,20 @@ def send_to_slack(
    if proxy_url:
        proxies = {"http": proxy_url, "https": proxy_url}

-    # 获取分批内容(使用 Slack 批次大小)
+    # 获取分批内容(使用 Slack 批次大小),预留批次头部空间
+    slack_batch_size = CONFIG["SLACK_BATCH_SIZE"]
+    header_reserve = _get_max_batch_header_size("slack")
    batches = split_content_into_batches(
-        report_data, "wework", update_info, max_bytes=CONFIG["SLACK_BATCH_SIZE"], mode=mode
+        report_data, "slack", update_info, max_bytes=slack_batch_size - header_reserve, mode=mode
    )
+    # 统一添加批次头部(已预留空间,不会超限)
+    batches = add_batch_headers(batches, "slack", slack_batch_size)

    print(f"Slack消息分为 {len(batches)} 批次发送 [{report_type}]")

    # 逐批发送
    for i, batch_content in enumerate(batches, 1):
-        # 添加批次标识
-        if len(batches) > 1:
-            batch_header = f"*[第 {i}/{len(batches)} 批次]*\n\n"
-            batch_content = batch_header + batch_content

        # 转换 Markdown 到 mrkdwn 格式
        mrkdwn_content = convert_markdown_to_mrkdwn(batch_content)
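`convert_markdown_to_mrkdwn` itself is outside this diff; below is a plausible regex-based sketch of the mapping it performs — Markdown links to Slack's `<url|text>` and `**bold**` to `*bold*` — offered purely as an assumption about its behavior, not the project's actual function:

```python
import re

def to_mrkdwn(text: str) -> str:
    # [text](url) -> <url|text>  (handle links before bold)
    text = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r"<\2|\1>", text)
    # **bold** -> *bold*
    text = re.sub(r"\*\*([^*]+)\*\*", r"*\1*", text)
    return text

print(to_mrkdwn("**热点** [TrendRadar](https://github.com/sansan0/TrendRadar)"))
# *热点* <https://github.com/sansan0/TrendRadar|TrendRadar>
```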

View File

@@ -1 +1 @@
-3.4.0
+3.4.1