Initial commit: tender information crawler and analysis system (招标信息爬虫与分析系统)

.gitignore (new file, 45 lines)
@@ -0,0 +1,45 @@
# Python
__pycache__/
*.py[cod]
*$py.class

# Data files
*.csv
*.json
full_content.txt
preamble_content.txt

# Logs
logs/
*.log

# Temporary files
temp_files/

# Attachments (large files)
data/attachments/

# Environment
.env
venv/
env/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
Thumbs.db
.DS_Store

# Testing
.pytest_cache/
coverage.xml

# Build
build/
dist/
*.egg-info/
README.md (new file, 194 lines)
@@ -0,0 +1,194 @@
# Public Resource Trading Center Crawler + AI Processing System

Automatically collects tender notices from the Zhejiang Province and Taizhou City public resource trading centers, extracts structured fields with DeepSeek AI, and uploads the results to Jiandaoyun (简道云) forms.

```
crawl → field mapping → content fetching (page + attachments) → DeepSeek AI extraction → Jiandaoyun upload
```

---

## Supported Pipelines

| Site | Notice type | AI-extracted fields | Jiandaoyun form |
|------|-------------|---------------------|-----------------|
| Zhejiang | 招标文件公示 | 类型、地区、投标截止日、最高投标限价、最高限价、资质要求、业绩要求、评标办法、评分说明与资信评分标准、有无答辩、招标人、项目概况、造价付款方式 | 浙江招标文件公示 |
| Zhejiang | 招标公告 | 批准文号、投标截止日 | 浙江招标公告 |
| Zhejiang | 澄清修改 | 批准文号 | 浙江澄清修改 |
| Taizhou | 招标计划公示 | 预估金额、类型、批准文号 | 台州招标计划 |
| Taizhou | 招标公告 | 批准文号、投标截止日 | — (no form yet) |

---

## Quick Start

### Install dependencies

```bash
pip install requests beautifulsoup4 pdfplumber python-docx
```

### One-off runs

```bash
# crawl only
python main.py -s zhejiang -c 工程建设 -t 招标文件公示 -p 5 -d yesterday

# crawl + AI processing
python main.py -s zhejiang -c 工程建设 -t 招标文件公示 -p 5 -d yesterday -P

# crawl + AI processing + Jiandaoyun upload
python main.py -s zhejiang -c 工程建设 -t 招标文件公示 -p 5 -d yesterday -P -U

# Taizhou tender plan publicity
python main.py -s taizhou -c 工程建设 -t 招标计划公示 -p 3 -d yesterday -P -U

# all sites
python main.py -s all -c 工程建设 -t 招标公告 -p 1 -d yesterday -P
```

### Arguments

| Flag | Meaning | Example |
|------|---------|---------|
| `-s` | site | `zhejiang` / `taizhou` / `all` |
| `-p` | number of pages to crawl | `5` (default 5) |
| `-c` | trade category | `工程建设` / `政府采购` |
| `-t` | notice type | `招标文件公示` / `招标公告` / `澄清修改` / `招标计划公示` |
| `-d` | date filter | `yesterday` / `2026-02-10` |
| `-a` | download attachments | flag |
| `-P` | enable AI processing | flag; requires `-t` |
| `-U` | upload to Jiandaoyun | flag; requires `-P` |

### Scheduled runs

```bash
# run directly (processes yesterday's full task list)
python scheduler.py

# Windows scheduled task (daily at 08:00)
schtasks /create /tn "ZTB_Spider" /tr "python C:\path\to\ztb\scheduler.py" /sc daily /st 08:00
```

`DAILY_TASKS` in `scheduler.py` defines the jobs that run every day; the current configuration is:

- Zhejiang 招标文件公示 (20 pages + AI + upload)
- Taizhou 招标计划公示 (7 pages + AI + upload)

---

## Project Structure

```
ztb/
├── main.py                 # CLI entry point
├── scheduler.py            # scheduled-task entry point
├── config.py               # global configuration (sites, AI, Jiandaoyun)
├── spiders/
│   ├── base.py             # spider base class (rate limiting, retries, circuit breaker)
│   ├── zhejiang.py         # Zhejiang Province spider
│   └── taizhou.py          # Taizhou City spider
├── processors/
│   ├── pipeline.py         # processing pipeline (chains the full flow)
│   ├── content_fetcher.py  # page + attachment content fetching
│   ├── deepseek.py         # DeepSeek AI field extraction
│   └── jiandaoyun.py       # Jiandaoyun upload
├── utils/
│   └── attachment.py       # attachment download helper
├── data/                   # output (CSV + JSON)
├── logs/                   # log files
└── temp_files/             # temporary attachments (cleaned up automatically)
```

---

## Data Flow

### 1. Spider output

**Zhejiang**: 标题、发布日期、地区、公告类型、链接、来源, plus 项目名称、项目代码、招标人、招标代理、联系电话、招标估算金额

**Taizhou**: 标题、发布日期、地区、链接、来源, plus 项目名称、招标人、项目批准文号、项目类型、计划招标时间、预估合同金额(万元)

### 2. Field mapping (`pipeline._map_fields`)

```
标题 → 名称
发布日期 → 发布时间 + 项目发布时间
链接 → 招标文件链接 / 公告链接 / 数据源链接 / 澄清文件链接
公告类型 → 招标阶段
项目批准文号 → 批准文号
项目类型 → 类型
预估合同金额 → 预估金额 (appends "万元" automatically)
计划招标时间 → 招标时间
```
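The mapping rules above can be sketched in a few lines of Python. This is an illustrative sketch only — the actual `pipeline._map_fields` implementation is not part of this listing, and the function name, `FIELD_MAP` table, and `link_field` default are assumptions:

```python
# Illustrative sketch of the field-mapping step; the real pipeline._map_fields
# may differ. Simple renames go through FIELD_MAP; special cases are handled inline.
FIELD_MAP = {
    "标题": "名称",
    "公告类型": "招标阶段",
    "项目批准文号": "批准文号",
    "项目类型": "类型",
    "计划招标时间": "招标时间",
}

def map_fields(record: dict, link_field: str = "公告链接") -> dict:
    mapped = {}
    for key, value in record.items():
        if key == "发布日期":
            # one source date feeds two target fields
            mapped["发布时间"] = value
            mapped["项目发布时间"] = value
        elif key == "链接":
            # target name depends on the pipeline (招标文件链接 / 公告链接 / ...)
            mapped[link_field] = value
        elif key == "预估合同金额":
            # append the "万元" unit if it is missing
            text = str(value)
            mapped["预估金额"] = text if text.endswith("万元") else text + "万元"
        else:
            mapped[FIELD_MAP.get(key, key)] = value
    return mapped
```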
### 3. AI extraction → merge

- AI value is valid (not "文档未提及", i.e. "not mentioned in the document") → **overwrites** the crawled value
- AI returns "文档未提及" → the crawled value is **kept**
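The merge rule can be stated as a small function. A minimal sketch, assuming the merge helper's name and signature (the real implementation is not shown in this listing):

```python
# Sketch of the merge rule: a valid AI value overrides the crawled value,
# while the sentinel "文档未提及" ("not mentioned") keeps the original.
NOT_MENTIONED = "文档未提及"

def merge_ai_fields(record: dict, ai_result: dict) -> dict:
    merged = dict(record)
    for field, value in ai_result.items():
        if value and value != NOT_MENTIONED:
            merged[field] = value  # AI value wins
        # otherwise keep the crawled value (if any)
    return merged
```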
### 4. Output files

- CSV: `data/浙江省公共资源交易中心_20260211_092500.csv`
- JSON: `data/浙江招标文件公示_AI处理_20260211_093446.json`

---

## Safety Mechanisms

### Spider layer (BaseSpider)

| Mechanism | Setting | Notes |
|-----------|---------|-------|
| Request rate | 10 requests/min | waits automatically when exceeded |
| List-page delay | 3–6 s | random interval |
| Detail-page delay | 2–5 s | random interval |
| Max requests | 300 per run | stops when exceeded |
| Consecutive-failure circuit breaker | 5 failures | stops once triggered |
| Empty-response detection | ≤10 bytes | retried with exponential backoff after anti-bot blocks |
| Graceful shutdown | Ctrl+C | saves collected data before exiting |
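The per-minute rate limit in the table could be implemented with a sliding window of timestamps. This is a sketch under that assumption — `BaseSpider`'s actual limiter is not shown in this commit:

```python
# Sliding-window requests-per-minute limiter: remembers the timestamps of
# recent requests and sleeps until the oldest one leaves the 60 s window.
import time
from collections import deque

class RateLimiter:
    def __init__(self, per_minute: int = 10):
        self.per_minute = per_minute
        self.stamps = deque()  # monotonic timestamps of recent requests

    def wait(self) -> None:
        now = time.monotonic()
        # drop timestamps older than the 60 s window
        while self.stamps and now - self.stamps[0] >= 60:
            self.stamps.popleft()
        if len(self.stamps) >= self.per_minute:
            # window is full: sleep until the oldest request expires
            time.sleep(60 - (now - self.stamps[0]))
        self.stamps.append(time.monotonic())
```

Each spider request would call `limiter.wait()` before issuing the HTTP call, so exceeding the budget turns into an automatic pause rather than an error.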
### AI processing layer (ContentFetcher)

| Mechanism | Setting | Notes |
|-----------|---------|-------|
| Request rate | 12 requests/min | independent rate limit |
| Request delay | 1.5–3 s | random wait before each request |
| Attachment size limit | 50 MB | skipped when exceeded |
| Content length limit | 120,000 chars | key passages extracted from over-long content |
| API failure fallback | local regex | degraded extraction when DeepSeek is unavailable |
| Temp-file cleanup | automatic | deleted immediately after parsing |

---

## Configuration

All configuration is centralized in `config.py`:

- `SPIDER_CONFIG` — spider delays, retries, rate limits
- `DEEPSEEK_API_KEY` — DeepSeek API key
- `PROCESSING_CONFIG` — AI processing timeout and content length limits
- `REGION_CONFIGS` — AI field definitions per pipeline
- `DEEPSEEK_PROMPTS` — prompt templates for the AI-extracted fields
- `JDY_CONFIG` — Jiandaoyun form IDs and field mappings

### Adding a new pipeline

1. Add a `"site:notice_type"` entry to `REGION_CONFIGS`
2. If new AI fields are needed, add prompts to `DEEPSEEK_PROMPTS`
3. If uploads are needed, add a form configuration to `JDY_CONFIG["forms"]`
4. Optional: add a scheduled job to `DAILY_TASKS` in `scheduler.py`
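Step 1 amounts to one new dictionary entry. A hypothetical example — the `taizhou:澄清修改` pipeline below does not exist in the current config and is shown only to illustrate the shape:

```python
# Hypothetical REGION_CONFIGS entry for a new pipeline; key and fields are
# illustrative, mirroring the structure of the existing entries.
REGION_CONFIGS = {
    "taizhou:澄清修改": {
        "region_name": "台州澄清修改",
        "link_field": "澄清文件链接",   # the field the crawled "链接" maps to
        "ai_fields": ["批准文号"],      # fields DeepSeek should extract
    },
}
```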
---

## Test Record (2026-02-11)

All 5 pipelines passed; 71 records with a 100% AI processing success rate:

| Pipeline | Crawled | AI success | Duration |
|----------|---------|------------|----------|
| Taizhou 招标计划公示 | 1 | 1/1 | ~12 s |
| Zhejiang 招标文件公示 | 20 | 20/20 | ~10 min |
| Zhejiang 招标公告 | 20 | 20/20 | ~3 min |
| Zhejiang 澄清修改 | 20 | 20/20 | ~2 min |
| Taizhou 招标公告 | 10 | 10/10 | ~2 min |
analyze_preamble.py (new file, 87 lines)
@@ -0,0 +1,87 @@
# -*- coding: utf-8 -*-
"""
Analyze the format of the "投标人须知前附表" (bidder-instructions pre-table)
section so the extraction prompts can be tuned.
"""
import logging

from processors.content_fetcher import ContentFetcher

# logging setup
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# test URL
TEST_URL = "https://ggzy.zj.gov.cn/jyxxgk/002001/002001011/20260212/9a7966d8-80f4-475b-897e-f7631bc64d0c.html"


def main():
    """Entry point."""
    logger.info(f"开始分析: {TEST_URL}")

    # fetch the page + attachment content
    fetcher = ContentFetcher(temp_dir="temp_files")
    content = fetcher.get_full_content(TEST_URL)

    if not content:
        logger.error("无法获取网页内容")
        return

    # locate the pre-table
    if "投标人须知前附表" in content:
        logger.info("找到投标人须知前附表")

        # extract the pre-table body
        start_idx = content.find("投标人须知前附表")
        # find where the pre-table ends (usually the next major section)
        end_markers = ["1. 总则", "投标人须知", "第一章", "第二章"]
        end_idx = len(content)

        for marker in end_markers:
            marker_idx = content.find(marker, start_idx + 100)
            if marker_idx > start_idx:
                end_idx = min(end_idx, marker_idx)

        preamble_content = content[start_idx:end_idx]
        logger.info(f"前附表内容长度: {len(preamble_content)} 字符")

        # save the pre-table to a file
        with open("preamble_content.txt", "w", encoding="utf-8") as f:
            f.write(preamble_content)
        logger.info("前附表内容已保存到 preamble_content.txt")

        # look for qualification and track-record requirements in the pre-table
        logger.info("\n分析前附表中的关键信息:")

        # qualification requirements (资质要求)
        if "资质要求" in preamble_content:
            logger.info("前附表中包含资质要求")
            # extract the surrounding context
            qual_start = preamble_content.find("资质要求")
            qual_end = preamble_content.find("\n", qual_start + 10)
            if qual_end > qual_start:
                logger.info(f"资质要求上下文: {preamble_content[qual_start:qual_end]}")
        else:
            logger.warning("前附表中未找到资质要求")

        # track-record requirements (业绩要求)
        if "业绩要求" in preamble_content:
            logger.info("前附表中包含业绩要求")
            # extract the surrounding context
            perf_start = preamble_content.find("业绩要求")
            perf_end = preamble_content.find("\n", perf_start + 10)
            if perf_end > perf_start:
                logger.info(f"业绩要求上下文: {preamble_content[perf_start:perf_end]}")
        else:
            logger.warning("前附表中未找到业绩要求")

        # other keywords that may appear
        keywords = ["资格要求", "企业资质", "施工总承包", "类似工程业绩"]
        for keyword in keywords:
            if keyword in preamble_content:
                logger.info(f"前附表中包含: {keyword}")
    else:
        logger.warning("未找到投标人须知前附表")


if __name__ == "__main__":
    main()
config.py (new file, 458 lines)
@@ -0,0 +1,458 @@
# -*- coding: utf-8 -*-
"""
Spider configuration.
"""

import os

# Zhejiang Province Public Resource Trading Center
ZHEJIANG_CONFIG = {
    "name": "浙江省公共资源交易中心",
    "base_url": "https://ggzy.zj.gov.cn",
    "api_url": "https://ggzy.zj.gov.cn/inteligentsearch/rest/esinteligentsearch/getFullTextDataNew",
    # trade category codes
    "categories": {
        "工程建设": "002001",
        "政府采购": "002002",
        "土地使用权": "002003",
        "国有产权": "002004",
        "矿业权": "002006",
        "其他交易": "002007",
    },
    # notice-type codes (construction projects)
    "notice_types": {
        "项目登记信息": "002001008",
        "招标计划": "002001013",
        "招标文件公示": "002001011",
        "招标公告": "002001001",
        "资格预审公告": "002001002",
        "澄清修改": "002001006",
        "资格预审结果": "002001007",
        "开标结果公示": "002001003",
        "中标候选人公示": "002001004",
        "中标结果公告": "002001005",
        "合同信息公开": "002001009",
    }
}

# Taizhou Public Resource Trading Center
TAIZHOU_CONFIG = {
    "name": "台州公共资源交易中心",
    "base_url": "https://ggzy.tzztb.zjtz.gov.cn",
    "api_url": "https://ggzy.tzztb.zjtz.gov.cn/rest/secaction/getSecInfoListYzm",
    "site_guid": "7eb5f7f1-9041-43ad-8e13-8fcb82ea831a",
    # trade categories
    "categories": {
        "工程建设": "002001",
        "政府采购": "002002",
        "产权交易": "002003",
        "拓展资源": "002004",
        "土地交易": "002005",
    },
    # notice types (construction projects)
    "notice_types": {
        "招标计划公示": "002001014",
        "招标文件公示": "002001001",
        "招标公告": "002001002",
        "资格预审公告": "002001003",
        "中标候选人公示": "002001005",
        "中标结果公告": "002001006",
        "保证金退还公示": "002001008",
    }
}

# Spider settings (safe parameters for cloud deployment)
SPIDER_CONFIG = {
    "page_size": 20,               # items per page
    "max_pages": 10,               # maximum pages to crawl
    # --- delay control ---
    "delay_min": 3,                # minimum list-page delay (seconds)
    "delay_max": 6,                # maximum list-page delay (seconds)
    "detail_delay_min": 2,         # minimum detail-page delay (seconds)
    "detail_delay_max": 5,         # maximum detail-page delay (seconds)
    # --- safety thresholds ---
    "timeout": 30,                 # HTTP timeout (seconds)
    "max_retries": 3,              # maximum retries per request
    "max_consecutive_errors": 5,   # circuit-breaker threshold (lower = stop earlier)
    "max_total_requests": 300,     # maximum requests per run
    "requests_per_minute": 10,     # maximum requests per minute (auto-throttles beyond this)
}

# data storage path
DATA_DIR = "data"

# ============ DeepSeek AI processing ============

# Read from the environment; never commit API keys to source control.
DEEPSEEK_API_KEY = os.environ.get("DEEPSEEK_API_KEY", "")

PROCESSING_CONFIG = {
    "temp_dir": "temp_files",      # temporary file directory
    "output_dir": "data",          # output directory for AI-processed results
    "request_timeout": 90,         # DeepSeek API timeout (seconds)
    "max_content_length": 120000,  # maximum content length sent to DeepSeek
    "api_delay": 1,                # delay between API calls (seconds)
}

# Region configs: AI-extracted fields per "site + notice type"
REGION_CONFIGS = {
    # key format: "site:notice_type"
    "zhejiang:招标文件公示": {
        "region_name": "浙江招标文件公示",
        "link_field": "招标文件链接",  # field the crawled "链接" maps to
        "ai_fields": [
            "类型", "地区", "投标截止日", "最高投标限价", "最高限价",
            "资质要求", "业绩要求", "评标办法", "评分说明与资信评分标准",
            "有无答辩", "招标人", "项目概况", "造价付款方式", "批准文号",
        ],
    },
    "zhejiang:招标公告": {
        "region_name": "浙江招标公告",
        "link_field": "公告链接",
        "ai_fields": ["批准文号", "投标截止日"],
    },
    "zhejiang:澄清修改": {
        "region_name": "浙江澄清修改",
        "link_field": "澄清文件链接",
        "ai_fields": ["批准文号"],
    },
    "zhejiang:招标计划": {
        "region_name": "浙江招标计划",
        "link_field": "公告链接",
        "ai_fields": ["批准文号", "类型", "地区", "招标时间"],
    },
    "taizhou:招标文件公示": {
        "region_name": "台州招标文件公示",
        "link_field": "招标文件链接",
        "ai_fields": [
            "类型", "地区", "批准文号", "投标截止日", "预估金额",
        ],
    },
    "taizhou:招标计划公示": {
        "region_name": "台州招标计划",
        "link_field": "公告链接",
        "ai_fields": ["批准文号", "类型", "地区", "招标时间"],
    },
}

# DeepSeek prompt templates (kept in Chinese: they are sent verbatim to the API
# against Chinese source documents)
DEEPSEEK_PROMPTS = {
    "批准文号": """请从招标公告中提取项目批准文号。

批准文号的常见格式:
- 台建招备[2026]XXX号
- 浙建计[2026]XXX号
- 2302-XXXXXX-XX-XX-XXXXXX(项目代码格式)

查找关键词:批准文号、备案登记号、项目代码、项目编号、招标编号

请直接返回批准文号,不要其他解释。
如果未找到,请返回"文档未提及"。""",

    "资质要求": """从招标文件中提取企业资质等级。

搜索策略:
1. 直接查找:资质要求、资质条件、资质等级、施工总承包、专业承包
2. 查找章节:投标人须知前附表、招标公告、资格审查条件
3. 特别注意:必须检查PDF附件中的内容,附件中通常包含详细的资质要求
4. 如果写"见投标人须知前附表"或类似引用,请必须查找并提取前附表中的具体资质要求
5. 如果前附表写"见招标公告",请在招标公告章节查找

重要:只返回资质类型和等级,不要任何其他内容!

正确格式示例:
建筑工程施工总承包三级及以上
市政公用工程施工总承包二级及以上

返回规则:
- 找到具体资质等级 → 返回资质等级
- 文档写"见招标公告"但招标公告在平台上 → 返回"详见招标公告"
- 确实未找到任何相关信息 → 返回"文档未提及""",

    "业绩要求": """请从招标文件中提取投标人业绩要求。

搜索策略:
1. 重点查找:投标人资格要求、业绩要求、投标人须知前附表、招标公告、评分标准
2. 特别注意:必须检查PDF附件中的内容,附件中通常包含详细的业绩要求
3. 关注关键词:业绩、工程经验、类似项目、同类工程、中标业绩、类似工程业绩
4. 注意时间范围要求:近X年、自20XX年以来
5. 特别注意:如果文档中提到"见投标人须知前附表"或类似引用,请查找并提取前附表中的业绩要求
6. 如果业绩要求在评分标准中,请从评分标准中提取

必须提取的内容:
- 业绩的时间范围要求
- 业绩的具体要求(工程类型、规模、金额等)
- 业绩数量要求
- 项目负责人业绩要求(如有)

返回规则:
- 找到具体业绩要求 → 返回业绩要求内容
- 文档写"见招标公告"但招标公告在平台上 → 返回"详见招标公告"
- 确实未找到 → 返回"文档未提及""",

    "评标办法": """请分析招标文件,判断采用的评标办法。

■ 必须检查以下内容:
1. 投标人须知前附表中的勾选项(☑/□)
2. "第三章 评标办法"的章节标题和具体内容
3. 附件中的评标定标章节(如"评标定标办法"、"评标细则"等)
4. 其他相关章节中关于评标方法的描述

■ 分析要点:
- 仔细阅读附件中评标定标章节的详细内容
- 关注评标方法的具体定义和操作流程
- 确认是否采用评定分离方式
- 区分综合评估法、经评审的最低投标价法等不同类型

■ 输出规则(评定分离优先):
- 如果文档中出现"☑采用评定分离"或"评定分离方式招标"或附件中明确说明采用评定分离→ 返回"评定分离"
- 综合评估法(含所有子类型)、资信商务评估法、合理低价法 → 返回"综合评估法"
- 经评审的最低投标价法 → 返回"经评审的最低投标价法"

■ 只能返回以下值之一:
1. 评定分离
2. 综合评估法
3. 经评审的最低投标价法
4. 文档未提及

只返回上述值之一,不要任何其他文字。""",

    "评分说明与资信评分标准": """请从招标文件中提取评分说明和资信评分标准。

■ 核心原则:
- 只提取文档中明确存在的具体评分规则,严禁推测或编造
- 如果只有章节标题但没有具体评分细则,必须返回"文档未提及"

■ 搜索策略:
1. 全面查找:评分说明、评分标准、评分办法、评标细则、评标办法、评审办法
2. 关注章节:第三章 评标办法、评标定标办法、评分标准、商务标评审、技术标评审、资信标评审
3. 关键词扩展:评分、基准价、商务标、技术标、资信标、分值、得分、权重、评分办法、评标基准价、报价得分
4. 特别关注:信用评价、信用等级、信用分、信誉分、诚信分、企业信用、项目负责人信用

■ 必须提取的内容:
1. 总体评分结构(各部分分值分配)
   - 必须在最前面总结总分结构:总分 =资信标 X分+技术标 X分+商务标 X分
   - 确保分值总和为100分
2. 基准价计算方法
3. 信用分详细细则(包括企业和项目负责人):
   - 信用等级划分标准(如A/B/C/D/E级对应的具体分数范围,如110分以上(含110分)、105-110分(含105分)等)
   - 各等级对应的具体得分(如A级3分、B级2.5分等)
   - 未取得信用评价的得分
   - 特别关注项目负责人信用评价分的等级和分数要求
   - 必须提取完整的分数范围和对应分数,如:A类(110-120分2.8分、120-130分2.85分)
   - 必须提取完整的等级划分,如:A级:110分以上(含110分)、B级:105-110分(含105分)、C级:100-105分(含100分)、D级:90-100分(含90分)、E级:90分以下
   - 必须提取附件中的信用等级划分标准,如《台州市住房和城乡建设局关于公布建筑工程和市政公用工程企业信用等级划分标准的通知》中的等级划分
   - 必须提取具体的分数阈值,如110分以上(含110分)、105-110分(含105分)等

■ 示例输出:
总分=资信标15分+技术标65分+商务标20分;评标基准价=最高限价×K值(K=80%-95%);商务标20分(报价得分采用线性插值法);技术标65分(打分制);资信标15分(其中企业信用分:A级110分以上3分,B级105-110分2.5分,C级100-105分2分,D级90-100分1.5分,E级90分以下1分,未取得0.5分;项目负责人信用分:A级110分以上2分,B级105-110分1.5分,C级100-105分1分,D级90-100分0.5分,E级90分以下0.3分,未取得0.1分)。

■ 信用分提取示例:
投标人信用评价分:A类(110-120分2.8分、120-130分2.85分、130-140分2.9分、140-150分3分),B类(105-106分2.55分、106-107分2.6分、107-108分2.65分、108-109分2.7分、109-110分2.75分),C类2.3分,D类1.8分,E类1.3分,未取得0.8分。

■ 返回规则:
- 找到具体评分规则 → 用简洁语言总结,信用分部分需详细列出
- 文档中只有章节目录,无具体内容 → 返回"文档未提及"
- 无法确定 → 返回"文档未提及"(严禁编造)""",

    "有无答辩": """请判断招标文件中是否要求"现场答辩"或"现场面试"。
关键词:答辩、面试、现场汇报、演示

如果明确要求答辩/面试,请返回"有";
如果明确说明不需要,请返回"无";
如果未提及,请返回"无"。""",

    "项目概况": """请从招标文件中提取项目概况信息。

查找章节:项目概况、工程概况、建设规模、招标范围

必须提取:
1. 建设地点
2. 建设规模(长度×宽度、面积、层数等)
3. 招标范围
4. 计划工期
5. 质量要求

请按以下格式输出:
建设地点:XX;建设规模:XX;招标范围:XX;计划工期:≤XX日历天;质量要求:XX

如果未找到,请返回"文档未提及"。""",

    "类型": """请根据项目信息判断项目类型。

只返回以下类型之一:
施工类(需细分):总承包、市政、安装、装饰、公路、水利、电力
其他类型:勘察、设计、监理、采购、咨询、其他

判断规则:
1. 名称含"设计"→设计,"监理"→监理,"勘察/测量"→勘察,"EPC/总承包"→总承包
2. 施工类细分:道路/桥梁/排水/管网→市政,公路/国道→公路,装修/幕墙→装饰,机电/电气→安装,房屋/学校/医院→总承包,水利/河道/水库→水利,电力/变电/输电→电力

只返回类型名称,不要其他解释。""",

    "地区": """请从招标文件中提取项目所在地区。

搜索策略(按优先级):
1. 直接查找:工程地点、建设地点、项目位置
2. 从招标人名称提取
3. 从信息来源提取
4. 从项目名称提取

输出格式:市+区/县,如"金华市金东区"、"台州市椒江区"
如果只能确定市级,返回市名。
如果确实无法提取,请返回"文档未提及"。""",

    "最高限价": """请从招标文件中提取价格信息,必须返回具体数字金额。

按优先级查找:
1. 最高投标限价
2. 招标控制价
3. 最高限价、上限价
4. 拨款控制价、控制价
5. 合同估算价、预算金额

请直接返回金额,带上单位(万元或元)。
示例:1234.56万元、2466285元

如果未提及任何价格信息,请返回"文档未提及"。""",

    "最高投标限价": """请从招标文件中提取最高投标限价(或招标控制价)。

查找关键词:最高投标限价、招标控制价、最高限价、上限价、控制价、包干总价

请直接返回金额,带上单位(万元或元)。
示例:1234.56万元、2466285元

如果未提及,请返回"文档未提及"。""",

    "预估金额": """请从文档中提取项目预估金额。

查找关键词:预估金额、预计投资、估算金额、预算金额、项目总投资

请直接返回金额,带上单位(万元或元)。
示例:1234.56万元、2466285元

如果未提及,请返回"文档未提及"。""",

    "投标截止日": """请从招标文件中提取投标截止时间。

搜索关键词:投标截止时间、投标截止日、截止时间、开标时间、递交截止时间

重要规则:
1. 绝对禁止推测或编造日期
2. 如实提取文档中的原始日期
3. 日期完整则返回标准格式 YYYY-MM-DD
4. 日期不完整则返回原始格式
5. 如果遇到日期范围(如"2026年3月1日至3月10日"),请提取最后一个日期作为投标截止日

如果未提及,请返回"文档未提及"。""",

    "招标人": """请从招标文件中提取招标人信息。

查找关键词:招标人、招标单位、业主单位、建设单位

请直接返回招标人名称,不要其他解释。
如果未提及,请返回"文档未提及"。""",

    "造价付款方式": """请从招标文件的合同条款中提取付款方式信息。

查找章节:合同条款、通用条款、专用条款、付款方式、工程款支付

必须提取以下四项(只提取百分比数字):
1. 预付款比例
2. 进度款支付比例
3. 结算款比例
4. 质保金比例

输出格式:预付款XX%,进度款XX%,结算款XX%,质保金XX%

如果某项未提及用"无"代替。
如果确实未找到付款相关内容,请返回"文档未提及"。""",

    "招标时间": """请从文档中提取计划招标时间。

查找关键词:计划招标时间、预计招标时间、招标时间、计划开标时间、预计开标时间

请直接返回招标时间,不要其他解释。
如果未找到,请返回"文档未提及"。""",
}

# ============ Jiandaoyun configuration ============

JDY_CONFIG = {
    # Read from the environment; never commit API keys to source control.
    # (The JDY_API_KEY variable name is a project choice.)
    "api_key": os.environ.get("JDY_API_KEY", ""),
    "forms": {
        "台州招标文件公示": {
            "app_id": "6965f35749afd00072b33c4a",
            "entry_id": "6965f35a962fab0113b87876",
            "field_mapping": {
                "项目发布时间": "_widget_1768289120174",
                "批准文号": "_widget_1768289120166",
                "名称": "_widget_1768289120167",
                "类型": "_widget_1768289120168",
                "投标截止日": "_widget_1768289120169",
                "预估金额": "_widget_1768289120170",
                "招标文件链接": "_widget_1768349415371",
                "招标阶段": "_widget_1768289432065",
            },
        },
        "台州招标计划": {
            "app_id": "6965f35749afd00072b33c4a",
            "entry_id": "6965f35a962fab0113b87876",
            "field_mapping": {
                "项目发布时间": "_widget_1768289120174",
                "批准文号": "_widget_1768289120166",
                "名称": "_widget_1768289120167",
                "类型": "_widget_1768289120168",
                "公告链接": "_widget_1768349415371",
                "招标阶段": "_widget_1768289432065",
            },
        },
        "浙江招标文件公示": {
            "app_id": "6965f35749afd00072b33c4a",
            "entry_id": "6965f50e955c9b638888e7d2",
            "field_mapping": {
                "发布时间": "_widget_1768289557651",
                "批准文号": "_widget_1768289557665",
                "名称": "_widget_1768349686082",
                "类型": "_widget_1768289557652",
                "地区": "_widget_1768289557653",
                "投标截止日": "_widget_1768289557654",
                # 最高投标限价 and 最高限价 intentionally share one widget
                "最高投标限价": "_widget_1768289557655",
                "最高限价": "_widget_1768289557655",
                "资质要求": "_widget_1768289557656",
                "业绩要求": "_widget_1768289557657",
                "评标办法": "_widget_1768289557658",
                "评分说明与资信评分标准": "_widget_1768289557659",
                "有无答辩": "_widget_1768289557660",
                "招标人": "_widget_1768289557661",
                "项目概况": "_widget_1768289557663",
                "造价付款方式": "_widget_1768289557664",
                "招标文件链接": "_widget_1768290058232",
                "招标阶段": "_widget_1768289909408",
            },
        },
        "浙江招标公告": {
            "app_id": "6965f35749afd00072b33c4a",
            "entry_id": "69703283d126285ded9ac1be",
            "field_mapping": {
                "发布时间": "_widget_1768289557651",
                "批准文号": "_widget_1768289557665",
                "名称": "_widget_1768349686082",
                "投标截止日": "_widget_1768289557654",
                "公告链接": "_widget_1768290058232",
                "招标阶段": "_widget_1768289909408",
            },
        },
        "浙江澄清修改": {
            "app_id": "6965f35749afd00072b33c4a",
            "entry_id": "697085af8e631aae04bb856c",
            "field_mapping": {
                "发布时间": "_widget_1768289557651",
                "批准文号": "_widget_1768289557665",
                "名称": "_widget_1768349686082",
                "澄清文件链接": "_widget_1768290058232",
                "招标阶段": "_widget_1768289909408",
            },
        },
    },
}
config_fixed.py (new file, 428 lines)
@@ -0,0 +1,428 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
爬虫配置文件
|
||||
"""
|
||||
|
||||
# 浙江省公共资源交易中心
|
||||
ZHEJIANG_CONFIG = {
|
||||
"name": "浙江省公共资源交易中心",
|
||||
"base_url": "https://ggzy.zj.gov.cn",
|
||||
"api_url": "https://ggzy.zj.gov.cn/inteligentsearch/rest/esinteligentsearch/getFullTextDataNew",
|
||||
# 交易领域代码
|
||||
"categories": {
|
||||
"工程建设": "002001",
|
||||
"政府采购": "002002",
|
||||
"土地使用权": "002003",
|
||||
"国有产权": "002004",
|
||||
"矿业权": "002006",
|
||||
"其他交易": "002007",
|
||||
},
|
||||
# 公告类型代码(工程建设)
|
||||
"notice_types": {
|
||||
"项目登记信息": "002001008",
|
||||
"招标计划": "002001013",
|
||||
"招标文件公示": "002001011",
|
||||
"招标公告": "002001001",
|
||||
"资格预审公告": "002001002",
|
||||
"澄清修改": "002001006",
|
||||
"资格预审结果": "002001007",
|
||||
"开标结果公示": "002001003",
|
||||
"中标候选人公示": "002001004",
|
||||
"中标结果公告": "002001005",
|
||||
"合同信息公开": "002001009",
|
||||
}
|
||||
}
|
||||
|
||||
# 台州公共资源交易中心
|
||||
TAIZHOU_CONFIG = {
|
||||
"name": "台州公共资源交易中心",
|
||||
"base_url": "https://ggzy.tzztb.zjtz.gov.cn",
|
||||
"api_url": "https://ggzy.tzztb.zjtz.gov.cn/rest/secaction/getSecInfoListYzm",
|
||||
"site_guid": "7eb5f7f1-9041-43ad-8e13-8fcb82ea831a",
|
||||
# 交易领域
|
||||
"categories": {
|
||||
"工程建设": "002001",
|
||||
"政府采购": "002002",
|
||||
"产权交易": "002003",
|
||||
"拓展资源": "002004",
|
||||
"土地交易": "002005",
|
||||
},
|
||||
# 公告类型(工程建设)
|
||||
"notice_types": {
|
||||
"招标计划公示": "002001014",
|
||||
"招标文件公示": "002001001",
|
||||
"招标公告": "002001002",
|
||||
"资格预审公告": "002001003",
|
||||
"中标候选人公示": "002001005",
|
||||
"中标结果公告": "002001006",
|
||||
"保证金退还公示": "002001008",
|
||||
}
|
||||
}
|
||||
|
||||
# 爬虫设置(云端部署安全参数)
|
||||
SPIDER_CONFIG = {
|
||||
"page_size": 20, # 每页数量
|
||||
"max_pages": 10, # 最大爬取页数
|
||||
# --- 延迟控制 ---
|
||||
"delay_min": 3, # 列表页最小延迟(秒)
|
||||
"delay_max": 6, # 列表页最大延迟(秒)
|
||||
"detail_delay_min": 2, # 详情页最小延迟(秒)
|
||||
"detail_delay_max": 5, # 详情页最大延迟(秒)
|
||||
# --- 安全阈值 ---
|
||||
"timeout": 30, # HTTP 超时时间(秒)
|
||||
"max_retries": 3, # 单次请求最大重试次数
|
||||
"max_consecutive_errors": 5, # 连续失败熔断阈值(降低=更早停止)
|
||||
"max_total_requests": 300, # 单次运行最大请求数
|
||||
"requests_per_minute": 10, # 每分钟最大请求数(超出自动减速)
|
||||
}
|
||||
|
||||
# 数据存储路径
|
||||
DATA_DIR = "data"
|
||||
|
||||
# ============ DeepSeek AI 处理配置 ============
|
||||
|
||||
DEEPSEEK_API_KEY = "sk-7b7211ee80b84000beebfa74a599ba13"
|
||||
|
||||
PROCESSING_CONFIG = {
|
||||
"temp_dir": "temp_files", # 临时文件目录
|
||||
"output_dir": "data", # AI处理结果输出目录
|
||||
"request_timeout": 90, # DeepSeek API 超时(秒)
|
||||
"max_content_length": 120000, # 发送给 DeepSeek 的最大内容长度
|
||||
"api_delay": 1, # API 调用间隔(秒)
|
||||
}
|
||||
|
||||
# 区域配置:按「站点+公告类型」定义需要AI提取的字段
|
||||
REGION_CONFIGS = {
|
||||
# key 格式: "site:notice_type"
|
||||
"zhejiang:招标文件公示": {
|
||||
"region_name": "浙江招标文件公示",
|
||||
"link_field": "招标文件链接", # 爬虫"链接"映射到的字段名
|
||||
"ai_fields": [
|
||||
"类型", "地区", "投标截止日", "最高投标限价", "最高限价",
|
||||
"资质要求", "业绩要求", "评标办法", "评分说明与资信评分标准",
|
||||
"有无答辩", "招标人", "项目概况", "造价付款方式", "批准文号",
|
||||
],
|
||||
},
|
||||
"zhejiang:招标公告": {
|
||||
"region_name": "浙江招标公告",
|
||||
"link_field": "公告链接",
|
||||
"ai_fields": ["批准文号", "投标截止日"],
|
||||
},
|
||||
"zhejiang:澄清修改": {
|
||||
"region_name": "浙江澄清修改",
|
||||
"link_field": "澄清文件链接",
|
||||
"ai_fields": ["批准文号"],
|
||||
},
|
||||
"taizhou:招标文件公示": {
|
||||
"region_name": "台州招标文件公示",
|
||||
"link_field": "招标文件链接",
|
||||
"ai_fields": [
|
||||
"类型", "地区", "批准文号", "投标截止日", "预估金额",
|
||||
],
|
||||
},
|
||||
}
|
||||
|
||||
# DeepSeek 提示词模板
|
||||
DEEPSEEK_PROMPTS = {
|
||||
"批准文号": """请从招标公告中提取项目批准文号。
|
||||
|
||||
批准文号的常见格式:
|
||||
- 台建招备[2026]XXX号
|
||||
- 浙建计[2026]XXX号
|
||||
- 2302-XXXXXX-XX-XX-XXXXXX(项目代码格式)
|
||||
|
||||
查找关键词:批准文号、备案登记号、项目代码、项目编号、招标编号
|
||||
|
||||
请直接返回批准文号,不要其他解释。
|
||||
如果未找到,请返回"文档未提及"。""",
|
||||
|
||||
"资质要求": """从招标文件中提取企业资质等级。
|
||||
|
||||
搜索策略:
|
||||
1. 直接查找:资质要求、资质条件、资质等级、施工总承包、专业承包
|
||||
2. 查找章节:投标人须知前附表、招标公告、资格审查条件
|
||||
3. 特别注意:必须检查PDF附件中的内容,附件中通常包含详细的资质要求
|
||||
4. 如果写"见投标人须知前附表"或类似引用,请必须查找并提取前附表中的具体资质要求
|
||||
5. 如果前附表写"见招标公告",请在招标公告章节查找
|
||||
|
||||
重要:只返回资质类型和等级,不要任何其他内容!
|
||||
|
||||
正确格式示例:
|
||||
建筑工程施工总承包三级及以上
|
||||
市政公用工程施工总承包二级及以上
|
||||
|
||||
返回规则:
|
||||
- 找到具体资质等级 → 返回资质等级
|
||||
- 文档写"见招标公告"但招标公告在平台上 → 返回"详见招标公告"
|
||||
- 确实未找到任何相关信息 → 返回"文档未提及""",
|
||||
|
||||
|
||||
"业绩要求": """请从招标文件中提取投标人业绩要求。
|
||||
|
||||
搜索策略:
|
||||
1. 重点查找:投标人资格要求、业绩要求、投标人须知前附表、招标公告、评分标准
|
||||
2. 特别注意:必须检查PDF附件中的内容,附件中通常包含详细的业绩要求
|
||||
3. 关注关键词:业绩、工程经验、类似项目、同类工程、中标业绩、类似工程业绩
|
||||
4. 注意时间范围要求:近X年、自20XX年以来
|
||||
5. 特别注意:如果文档中提到"见投标人须知前附表"或类似引用,请查找并提取前附表中的业绩要求
|
||||
6. 如果业绩要求在评分标准中,请从评分标准中提取
|
||||
|
||||
必须提取的内容:
|
||||
- 业绩的时间范围要求
|
||||
- 业绩的具体要求(工程类型、规模、金额等)
|
||||
- 业绩数量要求
|
||||
- 项目负责人业绩要求(如有)
|
||||
|
||||
返回规则:
|
||||
- 找到具体业绩要求 → 返回业绩要求内容
|
||||
- 文档写"见招标公告"但招标公告在平台上 → 返回"详见招标公告"
|
||||
- 确实未找到 → 返回"文档未提及""",
|
||||
|
||||
|
||||
"评标办法": """请分析招标文件,判断采用的评标办法。
|
||||
|
||||
■ 必须检查以下内容:
|
||||
1. 投标人须知前附表中的勾选项(☑/□)
|
||||
2. "第三章 评标办法"的章节标题和具体内容
|
||||
3. 附件中的评标定标章节(如"评标定标办法"、"评标细则"等)
|
||||
4. 其他相关章节中关于评标方法的描述
|
||||
|
||||
■ 分析要点:
|
||||
- 仔细阅读附件中评标定标章节的详细内容
|
||||
- 关注评标方法的具体定义和操作流程
|
||||
- 确认是否采用评定分离方式
|
||||
- 区分综合评估法、经评审的最低投标价法等不同类型
|
||||
|
||||
■ 输出规则(评定分离优先):
|
||||
- 如果文档中出现"☑采用评定分离"或"评定分离方式招标"或附件中明确说明采用评定分离→ 返回"评定分离"
|
||||
- 综合评估法(含所有子类型)、资信商务评估法、合理低价法 → 返回"综合评估法"
|
||||
- 经评审的最低投标价法 → 返回"经评审的最低投标价法"
|
||||
|
||||
■ 只能返回以下值之一:
|
||||
1. 评定分离
|
||||
2. 综合评估法
|
||||
3. 经评审的最低投标价法
|
||||
4. 文档未提及
|
||||
|
||||
只返回上述值之一,不要任何其他文字。""",
|
||||
|
||||
"评分说明与资信评分标准": """请从招标文件中提取评分说明和资信评分标准。
|
||||
|
||||
■ 核心原则:
|
||||
- 只提取文档中明确存在的具体评分规则,严禁推测或编造
|
||||
- 如果只有章节标题但没有具体评分细则,必须返回"文档未提及"
|
||||
|
||||
■ 搜索策略:
|
||||
1. 全面查找:评分说明、评分标准、评分办法、评标细则、评标办法、评审办法
|
||||
2. 关注章节:第三章 评标办法、评标定标办法、评分标准、商务标评审、技术标评审、资信标评审
|
||||
3. 关键词扩展:评分、基准价、商务标、技术标、资信标、分值、得分、权重、评分办法、评标基准价、报价得分
|
||||
4. 特别关注:信用评价、信用等级、信用分、信誉分、诚信分、企业信用、项目负责人信用
|
||||
|
||||
■ 必须提取的内容:
|
||||
1. 总体评分结构(各部分分值分配)
|
||||
- 必须在最前面总结总分结构:总分 =资信标 X分+技术标 X分+商务标 X分
|
||||
- 确保分值总和为100分
|
||||
2. 基准价计算方法
|
||||
3. 信用分详细细则(包括企业和项目负责人):
|
||||
- 信用等级划分标准(如A/B/C/D/E级对应的具体分数范围,如110分以上(含110分)、105-110分(含105分)等)
|
||||
- 各等级对应的具体得分(如A级3分、B级2.5分等)
|
||||
- 未取得信用评价的得分
|
||||
- 特别关注项目负责人信用评价分的等级和分数要求
|
||||
- 必须提取完整的分数范围和对应分数,如:A类(110-120分2.8分、120-130分2.85分)
|
||||
- 必须提取完整的等级划分,如:A级:110分以上(含110分)、B级:105-110分(含105分)、C级:100-105分(含100分)、D级:90-100分(含90分)、E级:90分以下
|
||||
- 必须提取附件中的信用等级划分标准,如《台州市住房和城乡建设局关于公布建筑工程和市政公用工程企业信用等级划分标准的通知》中的等级划分
|
||||
- 必须提取具体的分数阈值,如110分以上(含110分)、105-110分(含105分)等
|
||||
|
||||
■ 示例输出:
|
||||
总分=资信标15分+技术标65分+商务标20分;评标基准价=最高限价×K值(K=80%-95%);商务标20分(报价得分采用线性插值法);技术标65分(打分制);资信标15分(其中企业信用分:A级110分以上3分,B级105-110分2.5分,C级100-105分2分,D级90-100分1.5分,E级90分以下1分,未取得0.5分;项目负责人信用分:A级110分以上2分,B级105-110分1.5分,C级100-105分1分,D级90-100分0.5分,E级90分以下0.3分,未取得0.1分)。
|
||||
|
||||
■ 信用分提取示例:
|
||||
投标人信用评价分:A类(110-120分2.8分、120-130分2.85分、130-140分2.9分、140-150分3分),B类(105-106分2.55分、106-107分2.6分、107-108分2.65分、108-109分2.7分、109-110分2.75分),C类2.3分,D类1.8分,E类1.3分,未取得0.8分。
|
||||
|
||||
■ 返回规则:
|
||||
- 找到具体评分规则 → 用简洁语言总结,信用分部分需详细列出
|
||||
- 文档中只有章节目录,无具体内容 → 返回"文档未提及"
|
||||
- 无法确定 → 返回"文档未提及"(严禁编造)""",
|
||||
|
||||
"有无答辩": """请判断招标文件中是否要求"现场答辩"或"现场面试"。
|
||||
关键词:答辩、面试、现场汇报、演示
|
||||
|
||||
如果明确要求答辩/面试,请返回"有";
|
||||
如果明确说明不需要,请返回"无";
|
||||
如果未提及,请返回"无"。""",
|
||||
|
||||
"项目概况": """请从招标文件中提取项目概况信息。
|
||||
|
||||
查找章节:项目概况、工程概况、建设规模、招标范围
|
||||
|
||||
必须提取:
|
||||
1. 建设地点
|
||||
2. 建设规模(长度×宽度、面积、层数等)
|
||||
3. 招标范围
|
||||
4. 计划工期
|
||||
5. 质量要求
|
||||
|
||||
请按以下格式输出:
|
||||
建设地点:XX;建设规模:XX;招标范围:XX;计划工期:≤XX日历天;质量要求:XX
|
||||
|
||||
如果未找到,请返回"文档未提及"。""",
|
||||
|
||||
"类型": """请根据项目信息判断项目类型。
|
||||
|
||||
只返回以下类型之一:
|
||||
施工类(需细分):总承包、市政、安装、装饰、公路、水利、电力
|
||||
其他类型:勘察、设计、监理、采购、咨询、其他
|
||||
|
||||
判断规则:
|
||||
1. 名称含"设计"→设计,"监理"→监理,"勘察/测量"→勘察,"EPC/总承包"→总承包
|
||||
2. 施工类细分:道路/桥梁/排水/管网→市政,公路/国道→公路,装修/幕墙→装饰,机电/电气→安装,房屋/学校/医院→总承包,水利/河道/水库→水利,电力/变电/输电→电力
|
||||
|
||||
只返回类型名称,不要其他解释。""",
|
||||
|
||||
"地区": """请从招标文件中提取项目所在地区。
|
||||
|
||||
搜索策略(按优先级):
|
||||
1. 直接查找:工程地点、建设地点、项目位置
|
||||
2. 从招标人名称提取
|
||||
3. 从信息来源提取
|
||||
4. 从项目名称提取
|
||||
|
||||
输出格式:市+区/县,如"金华市金东区"、"台州市椒江区"
|
||||
如果只能确定市级,返回市名。
|
||||
如果确实无法提取,请返回"文档未提及"。""",
|
||||
|
||||
"最高限价": """请从招标文件中提取价格信息,必须返回具体数字金额。
|
||||
|
||||
按优先级查找:
|
||||
1. 最高投标限价
|
||||
2. 招标控制价
|
||||
3. 最高限价、上限价
|
||||
4. 拨款控制价、控制价
5. 合同估算价、预算金额

请直接返回金额,带上单位(万元或元)。
示例:1234.56万元、2466285元

如果未提及任何价格信息,请返回"文档未提及"。""",

    "最高投标限价": """请从招标文件中提取最高投标限价(或招标控制价)。

查找关键词:最高投标限价、招标控制价、最高限价、上限价、控制价、包干总价

请直接返回金额,带上单位(万元或元)。
示例:1234.56万元、2466285元

如果未提及,请返回"文档未提及"。""",

    "预估金额": """请从文档中提取项目预估金额。

查找关键词:预估金额、预计投资、估算金额、预算金额、项目总投资

请直接返回金额,带上单位(万元或元)。
示例:1234.56万元、2466285元

如果未提及,请返回"文档未提及"。""",

    "投标截止日": """请从招标文件中提取投标截止时间。

搜索关键词:投标截止时间、投标截止日、截止时间、开标时间、递交截止时间

重要规则:
1. 绝对禁止推测或编造日期
2. 如实提取文档中的原始日期
3. 日期完整则返回标准格式 YYYY-MM-DD
4. 日期不完整则返回原始格式

如果未提及,请返回"文档未提及"。""",

    "招标人": """请从招标文件中提取招标人信息。

查找关键词:招标人、招标单位、业主单位、建设单位

请直接返回招标人名称,不要其他解释。
如果未提及,请返回"文档未提及"。""",

    "造价付款方式": """请从招标文件的合同条款中提取付款方式信息。

查找章节:合同条款、通用条款、专用条款、付款方式、工程款支付

必须提取以下四项(只提取百分比数字):
1. 预付款比例
2. 进度款支付比例
3. 结算款比例
4. 质保金比例

输出格式:预付款XX%,进度款XX%,结算款XX%,质保金XX%

如果某项未提及用"无"代替。
如果确实未找到付款相关内容,请返回"文档未提及"。""",
}


# ============ 简道云配置 ============

JDY_CONFIG = {
    "api_key": "JmxuXmkew33mvQttRD3ftSfQoOEX6R9J",
    "forms": {
        "台州招标文件公示": {
            "app_id": "6965f35749afd00072b33c4a",
            "entry_id": "6965f35a962fab0113b87876",
            "field_mapping": {
                "项目发布时间": "_widget_1768289120174",
                "批准文号": "_widget_1768289120166",
                "名称": "_widget_1768289120167",
                "类型": "_widget_1768289120168",
                "投标截止日": "_widget_1768289120169",
                "预估金额": "_widget_1768289120170",
                "招标文件链接": "_widget_1768349415371",
                "招标阶段": "_widget_1768289432065",
            },
        },
        "浙江招标文件公示": {
            "app_id": "6965f35749afd00072b33c4a",
            "entry_id": "6965f50e955c9b638888e7d2",
            "field_mapping": {
                "发布时间": "_widget_1768289557651",
                "批准文号": "_widget_1768289557665",
                "名称": "_widget_1768349686082",
                "类型": "_widget_1768289557652",
                "地区": "_widget_1768289557653",
                "投标截止日": "_widget_1768289557654",
                "最高投标限价": "_widget_1768289557655",
                "最高限价": "_widget_1768289557655",
                "资质要求": "_widget_1768289557656",
                "业绩要求": "_widget_1768289557657",
                "评标办法": "_widget_1768289557658",
                "评分说明与资信评分标准": "_widget_1768289557659",
                "有无答辩": "_widget_1768289557660",
                "招标人": "_widget_1768289557661",
                "项目概况": "_widget_1768289557663",
                "造价付款方式": "_widget_1768289557664",
                "招标文件链接": "_widget_1768290058232",
                "招标阶段": "_widget_1768289909408",
            },
        },
        "浙江招标公告": {
            "app_id": "6965f35749afd00072b33c4a",
            "entry_id": "69703283d126285ded9ac1be",
            "field_mapping": {
                "发布时间": "_widget_1768289557651",
                "批准文号": "_widget_1768289557665",
                "名称": "_widget_1768349686082",
                "投标截止日": "_widget_1768289557654",
                "公告链接": "_widget_1768290058232",
                "招标阶段": "_widget_1768289909408",
            },
        },
        "浙江澄清修改": {
            "app_id": "6965f35749afd00072b33c4a",
            "entry_id": "697085af8e631aae04bb856c",
            "field_mapping": {
                "发布时间": "_widget_1768289557651",
                "批准文号": "_widget_1768289557665",
                "名称": "_widget_1768349686082",
                "澄清文件链接": "_widget_1768290058232",
                "招标阶段": "_widget_1768289909408",
            },
        },
    },
}
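`field_mapping` 的作用是把本地字段名翻译成简道云表单的 widget ID。下面是一个假设性的最小示意(示例记录为虚构数据,widget ID 取自上面的浙江表单配置),演示如何按映射组装简道云 v5 创建接口所需的 `data` 结构:

```python
# 最小示意(虚构示例数据):按 field_mapping 把一条记录
# 转成简道云 API 所需的 {widget_id: {"value": ...}} 结构
field_mapping = {
    "名称": "_widget_1768349686082",
    "批准文号": "_widget_1768289557665",
}

record = {"名称": "某项目招标文件", "批准文号": "ZJ-2026-001", "备注": "未映射字段会被忽略"}

data = {
    widget: {"value": record[field]}
    for field, widget in field_mapping.items()
    if record.get(field)  # 空值字段跳过
}
```

这与后文 `processors/jiandaoyun.py` 中 `_convert` 的思路一致(真实实现还会过滤"文档未提及"并把价格字段转为数字)。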
153
main.py
Normal file
@@ -0,0 +1,153 @@
# -*- coding: utf-8 -*-
"""
公共资源交易中心爬虫 - 主程序
支持:浙江省、台州市
可选:DeepSeek AI 处理 + 简道云上传
"""
import argparse
import logging
from config import ZHEJIANG_CONFIG, TAIZHOU_CONFIG, SPIDER_CONFIG, DATA_DIR
from spiders import ZhejiangSpider, TaizhouSpider
from spiders.base import setup_logging

logger = logging.getLogger("ztb")


def crawl_zhejiang(max_pages=5, category=None, notice_type=None,
                   date_filter=None, download_attachment=False):
    """爬取浙江省公共资源交易中心"""
    spider = ZhejiangSpider(ZHEJIANG_CONFIG, SPIDER_CONFIG, DATA_DIR)
    spider.crawl(max_pages=max_pages, category=category, notice_type=notice_type,
                 date_filter=date_filter, download_attachment=download_attachment)
    spider.save_to_csv()
    return spider.results


def crawl_taizhou(max_pages=5, category=None, notice_type=None,
                  date_filter=None, download_attachment=False):
    """爬取台州公共资源交易中心"""
    spider = TaizhouSpider(TAIZHOU_CONFIG, SPIDER_CONFIG, DATA_DIR)
    spider.crawl(max_pages=max_pages, category=category, notice_type=notice_type,
                 date_filter=date_filter, download_attachment=download_attachment)
    spider.save_to_csv()
    return spider.results


def crawl_all(max_pages=5, category=None, notice_type=None,
              date_filter=None, download_attachment=False):
    """爬取所有网站"""
    all_results = []

    logger.info("=" * 40)
    results = crawl_zhejiang(max_pages, category, notice_type,
                             date_filter, download_attachment)
    all_results.extend(results)

    logger.info("=" * 40)
    results = crawl_taizhou(max_pages, category, notice_type,
                            date_filter, download_attachment)
    all_results.extend(results)

    logger.info(f"全部爬取完成,共 {len(all_results)} 条数据")
    return all_results


def main():
    parser = argparse.ArgumentParser(description='公共资源交易中心爬虫')
    parser.add_argument(
        '-s', '--site',
        choices=['zhejiang', 'taizhou', 'all'],
        default='zhejiang',
        help='选择爬取的网站 (默认: zhejiang)'
    )
    parser.add_argument(
        '-p', '--pages',
        type=int,
        default=None,
        help='爬取页数 (默认: 5, 指定日期时默认100)'
    )
    parser.add_argument(
        '-c', '--category',
        default=None,
        help='交易领域 (如: 工程建设, 政府采购)'
    )
    parser.add_argument(
        '-t', '--type',
        default=None,
        help='公告类型 (如: 招标公告, 招标文件公示)'
    )
    parser.add_argument(
        '-d', '--date',
        default=None,
        help='日期过滤 (yesterday 或 2026-02-03)'
    )
    parser.add_argument(
        '-a', '--attachment',
        action='store_true',
        help='下载并解析附件'
    )
    parser.add_argument(
        '-P', '--process',
        action='store_true',
        help='启用 DeepSeek AI 处理(提取结构化字段)'
    )
    parser.add_argument(
        '-U', '--upload',
        action='store_true',
        help='上传处理结果到简道云(需配合 -P 使用)'
    )

    args = parser.parse_args()
    setup_logging()

    # 页数:指定日期时自动放大,确保抓完全部数据
    max_pages = args.pages
    if max_pages is None:
        max_pages = 100 if args.date else 5

    # 爬取
    results = []
    # 为台州招标计划公示设置默认的工程建设类别
    if args.site == 'taizhou' and args.type == '招标计划公示' and not args.category:
        args.category = '工程建设'
        logger.info("为台州招标计划公示自动设置类别: 工程建设")

    if args.site == 'zhejiang':
        results = crawl_zhejiang(
            max_pages, args.category, args.type, args.date, args.attachment)
    elif args.site == 'taizhou':
        results = crawl_taizhou(
            max_pages, args.category, args.type, args.date, args.attachment)
    elif args.site == 'all':
        results = crawl_all(
            max_pages, args.category, args.type, args.date, args.attachment)

    # AI 处理
    if args.process and results and args.type:
        from processors import ProcessingPipeline
        pipeline = ProcessingPipeline()
        if args.site == 'all':
            # 按站点分组处理
            source_to_site = {
                ZHEJIANG_CONFIG['name']: 'zhejiang',
                TAIZHOU_CONFIG['name']: 'taizhou',
            }
            for source, site_name in source_to_site.items():
                site_results = [
                    r for r in results if r.get('来源') == source]
                if site_results:
                    pipeline.process_results(
                        site_results, site=site_name,
                        notice_type=args.type, upload=args.upload,
                    )
        else:
            pipeline.process_results(
                results, site=args.site,
                notice_type=args.type, upload=args.upload,
            )
    elif args.process and not args.type:
        logger.warning("启用 AI 处理时需指定公告类型 (-t),已跳过")


if __name__ == '__main__':
    main()
68
process_csv.py
Normal file
@@ -0,0 +1,68 @@
import csv
import re

# 读取CSV文件
with open('data/浙江省公共资源交易中心_20260213_172414.csv', 'r', encoding='utf-8') as file:
    reader = csv.reader(file)
    headers = next(reader)       # 读取表头
    rows = list(reader)[:20]     # 读取前20条数据

# 打印表头
print('\n原始表头:')
for i, header in enumerate(headers):
    print(f'{i+1}. {header}')

# 分析前20条数据
print('\n前20条数据分析:')
print('-' * 100)
print(f'| {"序号":<4} | {"标题":<80} | {"项目批准文号":<30} | {"项目名称":<80} |')
print('-' * 100)

for i, row in enumerate(rows):
    title = row[0]
    project_id = row[6]
    project_name = row[7]

    # 从标题中提取批准文号(如果有的话)
    id_match = re.search(r'\[(.*?)\]$', title)
    extracted_id = id_match.group(1) if id_match else ''

    # 从标题中提取纯项目名称
    extracted_name = re.sub(r'\[(.*?)\]$', '', title).strip()

    # 验证项目批准文号是否一致
    id_match_flag = project_id == extracted_id

    # 验证项目名称是否正确
    name_match_flag = project_name == extracted_name

    print(f'| {i+1:<4} | {title} | {project_id} | {project_name} |')

    # 如果有不一致,打印详细信息
    if not id_match_flag:
        print(f'  警告: 项目批准文号不一致 - 标题中提取: {extracted_id}, 列中值: {project_id}')
    if not name_match_flag:
        print(f'  警告: 项目名称不一致 - 标题中提取: {extracted_name}, 列中值: {project_name}')

print('-' * 100)

# 检查是否所有项目名称都不包含批准文号
print('\n项目名称列检查:')
print('-' * 100)
print(f'| {"序号":<4} | {"项目名称":<80} | {"是否包含批准文号":<15} |')
print('-' * 100)

for i, row in enumerate(rows):
    project_name = row[7]
    has_id = bool(re.search(r'\[.*?\]$', project_name))
    print(f'| {i+1:<4} | {project_name} | {"是" if has_id else "否":<15} |')

print('-' * 100)

# 总结
print('\n总结:')
print('1. 从CSV文件中可以看到,项目批准文号和项目名称已经正确分离到不同列中')
print('2. 标题列包含完整信息:项目名称[项目批准文号]')
print('3. 项目批准文号列(第7列)只包含批准文号')
print('4. 项目名称列(第8列)只包含纯项目名称,不包含批准文号')
print('5. 前3条数据的项目名称和项目批准文号分离正确')
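脚本的核心是用同一个正则把"项目名称[批准文号]"形式的标题拆成两部分。下面是一个独立的最小示意(示例标题为虚构数据),展示这条拆分规则:

```python
import re

# 示意(虚构示例):标题形如 "项目名称[批准文号]",
# 用 \[(.*?)\]$ 取末尾方括号内容作为批准文号,re.sub 去掉它得到纯名称
title = "某道路改造工程施工招标[台发改〔2026〕12号]"

m = re.search(r'\[(.*?)\]$', title)
approval_no = m.group(1) if m else ''
project_name = re.sub(r'\[(.*?)\]$', '', title).strip()
```

注意批准文号内部常见的全角括号〔〕不会干扰匹配,因为正则只识别末尾的半角方括号。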
4
processors/__init__.py
Normal file
@@ -0,0 +1,4 @@
# -*- coding: utf-8 -*-
from .pipeline import ProcessingPipeline

__all__ = ['ProcessingPipeline']
293
processors/content_fetcher.py
Normal file
@@ -0,0 +1,293 @@
# -*- coding: utf-8 -*-
"""
内容获取器 - 获取详情页文本 + 附件内容
"""
import logging
import os
import random
import re
import time

import requests
import urllib3
import pdfplumber
from bs4 import BeautifulSoup
from docx import Document

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
logger = logging.getLogger("ztb")


class ContentFetcher:
    """页面内容 + 附件获取器"""

    # 速率控制参数
    RPM_LIMIT = 12        # 每分钟最大请求数
    DELAY_MIN = 1.5       # 请求间最小延迟(秒)
    DELAY_MAX = 3.0       # 请求间最大延迟(秒)
    MAX_DOWNLOAD_MB = 50  # 单个附件最大体积(MB)

    def __init__(self, temp_dir: str = "temp_files"):
        self.temp_dir = temp_dir
        os.makedirs(temp_dir, exist_ok=True)
        self.headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                          "AppleWebKit/537.36 (KHTML, like Gecko) "
                          "Chrome/120.0.0.0 Safari/537.36",
        }
        self._req_timestamps = []  # 用于 RPM 限速

    # ---------- 公开方法 ----------

    def get_full_content(self, url: str, max_attachments: int = 2) -> str:
        """
        获取页面文本 + 附件解析文本,合并返回(单次请求)

        Args:
            url: 详情页 URL
            max_attachments: 最多处理的附件数

        Returns:
            合并后的全文文本
        """
        # 1. 获取页面 HTML(单次请求)
        html = self._fetch_html(url)
        if not html:
            return ""

        # 2. 提取页面纯文本
        soup = BeautifulSoup(html, "html.parser")
        page_content = soup.get_text(separator="\n", strip=True)

        # 3. 提取发布时间
        publish_time = self._extract_publish_time(soup, page_content)
        if publish_time:
            page_content = f"发布时间: {publish_time}\n\n" + page_content

        # 4. 从同一 HTML 查找并解析附件
        attachments = self._find_attachments(soup, url)
        attachment_content = ""
        for att in attachments[:max_attachments]:
            att_text = self._download_and_parse(att["url"], att["name"])
            if att_text:
                attachment_content += f"\n\n=== 附件: {att['name']} ===\n{att_text}"

        full_content = page_content
        if attachment_content:
            full_content += attachment_content

        return full_content

    @staticmethod
    def _extract_publish_time(soup: BeautifulSoup, page_content: str) -> str:
        """
        从页面中提取发布时间

        Args:
            soup: BeautifulSoup 对象
            page_content: 页面纯文本

        Returns:
            发布时间字符串,如 "2026-02-13 16:12:28"
        """
        # 1. 尝试从页面文本中提取
        patterns = [
            r'信息发布时间[::]\s*([\d-]+\s[\d:]+)',
            r'发布时间[::]\s*([\d-]+\s[\d:]+)',
            r'发布日期[::]\s*([\d-]+\s[\d:]+)',
            r'发布时间[::]\s*([\d-]+)',
            r'发布日期[::]\s*([\d-]+)',
        ]

        for pattern in patterns:
            match = re.search(pattern, page_content)
            if match:
                return match.group(1).strip()

        # 2. 尝试从HTML标签中提取
        time_tags = soup.find_all(['time', 'span', 'div'],
                                  class_=re.compile(r'time|date|publish', re.I))
        for tag in time_tags:
            text = tag.get_text(strip=True)
            match = re.search(r'([\d-]+\s[\d:]+)', text)
            if match:
                return match.group(1).strip()

        return ""

    # ---------- 速率控制 ----------

    def _throttle(self):
        """请求前限速:RPM 上限 + 随机延迟"""
        now = time.time()
        self._req_timestamps = [
            t for t in self._req_timestamps if now - t < 60]
        if len(self._req_timestamps) >= self.RPM_LIMIT:
            wait = 60 - (now - self._req_timestamps[0]) + random.uniform(1, 3)
            if wait > 0:
                logger.debug(f"ContentFetcher 限速等待 {wait:.0f}s")
                time.sleep(wait)
        self._req_timestamps.append(time.time())
        time.sleep(random.uniform(self.DELAY_MIN, self.DELAY_MAX))

    # ---------- 页面获取 ----------

    def _fetch_html(self, url: str, max_retries: int = 3) -> str:
        """获取页面 HTML 原文"""
        self._throttle()
        for retry in range(max_retries):
            try:
                resp = requests.get(url, headers=self.headers,
                                    timeout=45, verify=False)
                resp.encoding = "utf-8"
                if resp.status_code != 200:
                    logger.warning(f"页面返回 {resp.status_code}: {url[:60]}")
                    if retry < max_retries - 1:
                        time.sleep(3)
                        continue
                    return ""

                logger.debug(f"页面获取成功 {len(resp.text)} 字符: {url[:60]}")
                return resp.text

            except Exception as e:
                logger.warning(f"获取页面失败 ({retry+1}/{max_retries}): {e}")
                if retry < max_retries - 1:
                    time.sleep(3)
        return ""

    # ---------- 附件发现 ----------

    @staticmethod
    def _find_attachments(soup: BeautifulSoup, base_url: str) -> list:
        """从已解析的 HTML 中查找附件链接"""
        attachments = []
        for link in soup.find_all("a"):
            href = link.get("href", "")
            text = link.get_text(strip=True)
            if any(ext in href.lower() for ext in [".pdf", ".doc", ".docx"]):
                if not href.startswith("http"):
                    if href.startswith("/"):
                        base = "/".join(base_url.split("/")[:3])
                        href = base + href
                    else:
                        href = base_url.rsplit("/", 1)[0] + "/" + href
                attachments.append({
                    "name": text or href.split("/")[-1],
                    "url": href,
                })
        return attachments

    # ---------- 附件下载与解析 ----------

    def _download_and_parse(self, url: str, filename: str,
                            max_retries: int = 3) -> str:
        """下载附件并解析为文本"""
        self._throttle()
        file_type = self._get_file_type(url)
        max_bytes = self.MAX_DOWNLOAD_MB * 1024 * 1024
        for retry in range(max_retries):
            try:
                logger.debug(f"下载附件: {filename}")
                resp = requests.get(url, headers=self.headers,
                                    timeout=90, verify=False, stream=True)
                resp.raise_for_status()

                temp_path = os.path.join(
                    self.temp_dir, f"temp_{hash(url)}.{file_type}")
                total = 0
                with open(temp_path, "wb") as f:
                    for chunk in resp.iter_content(chunk_size=8192):
                        if chunk:
                            f.write(chunk)
                            total += len(chunk)
                            if total > max_bytes:
                                logger.warning(
                                    f"附件超过 {self.MAX_DOWNLOAD_MB}MB 限制,跳过: {filename}")
                                break

                if total > max_bytes:
                    try:
                        os.remove(temp_path)
                    except OSError:
                        pass
                    return ""

                logger.debug(f"附件已下载 {total/1024:.1f}KB: {filename}")

                try:
                    if file_type == "pdf":
                        return self._parse_pdf(temp_path)
                    elif file_type in ("doc", "docx"):
                        return self._parse_word(temp_path)
                    return ""
                finally:
                    try:
                        os.remove(temp_path)
                    except OSError:
                        pass

            except Exception as e:
                logger.warning(f"附件处理失败 ({retry+1}/{max_retries}): {e}")
                if retry < max_retries - 1:
                    time.sleep(4)
        return ""

    # ---------- 文件解析 ----------

    @staticmethod
    def _parse_pdf(file_path: str) -> str:
        """解析 PDF 文件"""
        try:
            text = ""
            with pdfplumber.open(file_path) as pdf:
                for page in pdf.pages:
                    page_text = page.extract_text()
                    if page_text:
                        text += page_text + "\n"
            return text
        except Exception as e:
            logger.warning(f"PDF解析失败: {e}")
            return ""

    @staticmethod
    def _parse_word(file_path: str) -> str:
        """解析 Word 文件(支持 .doc 和 .docx)"""
        # 尝试 python-docx (适用于 .docx)
        try:
            doc = Document(file_path)
            text = "\n".join(p.text for p in doc.paragraphs)
            if len(text) > 500:
                return text
        except Exception:
            pass

        # 回退: UTF-16LE 解码 (适用于 .doc)
        try:
            with open(file_path, "rb") as f:
                content = f.read()
            raw = content.decode("utf-16le", errors="ignore")
            readable = []
            for c in raw:
                if "\u4e00" <= c <= "\u9fff" or c in ",。;:""''()《》【】、0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz%.=×+- \n□☑":
                    readable.append(c)
                elif readable and readable[-1] != " ":
                    readable.append(" ")
            text = re.sub(r" +", " ", "".join(readable))
            if len(text) > 500:
                return text
        except Exception:
            pass

        return ""

    @staticmethod
    def _get_file_type(filename: str) -> str:
        """根据文件名/URL 判断文件类型"""
        low = filename.lower()
        if ".pdf" in low:
            return "pdf"
        if ".docx" in low:
            return "docx"
        if ".doc" in low:
            return "doc"
        return "unknown"
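`_find_attachments` 对相对链接的补全规则值得单独说明:以 `/` 开头的 href 拼到站点根,其余拼到当前页面所在目录。下面是该规则的独立示意(URL 为虚构示例):

```python
# 示意(虚构 URL):_find_attachments 使用的相对链接补全规则
base_url = "https://example.com/gcjs/notice/detail.html"

def resolve(href: str) -> str:
    if href.startswith("http"):
        return href                                        # 已是绝对地址
    if href.startswith("/"):
        return "/".join(base_url.split("/")[:3]) + href    # 拼到站点根
    return base_url.rsplit("/", 1)[0] + "/" + href         # 拼到当前目录

print(resolve("/files/a.pdf"))   # https://example.com/files/a.pdf
print(resolve("b.docx"))         # https://example.com/gcjs/notice/b.docx
```

标准库的 `urllib.parse.urljoin` 可以实现同样的语义,这里的手写版本只是忠实还原爬虫内的逻辑。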
343
processors/deepseek.py
Normal file
@@ -0,0 +1,343 @@
# -*- coding: utf-8 -*-
"""
DeepSeek AI 处理器 - 从招标文件内容中提取结构化字段
"""
import json
import logging
import re
import time
import urllib3

import requests

from config import DEEPSEEK_API_KEY, DEEPSEEK_PROMPTS, PROCESSING_CONFIG

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
logger = logging.getLogger("ztb")


class DeepSeekProcessor:
    """DeepSeek AI 字段提取器"""

    def __init__(self, api_key: str = None):
        self.api_key = api_key or DEEPSEEK_API_KEY
        self.api_url = "https://api.deepseek.com/chat/completions"
        self.model = "deepseek-chat"
        self.timeout = PROCESSING_CONFIG.get("request_timeout", 90)
        self.max_content = PROCESSING_CONFIG.get("max_content_length", 120000)

    def extract_fields(self, content: str, fields: list,
                       region_name: str = "") -> dict:
        """
        使用 DeepSeek 提取指定字段

        Args:
            content: 页面+附件合并后的文本
            fields: 需要提取的字段列表
            region_name: 区域名称(用于日志)

        Returns:
            {字段名: 提取值} 字典
        """
        if not content or not fields:
            return {}

        # 构建字段提示词
        field_prompts = []
        for field in fields:
            if field in DEEPSEEK_PROMPTS:
                field_prompts.append(f"【{field}】\n{DEEPSEEK_PROMPTS[field]}")
            else:
                field_prompts.append(
                    f'【{field}】请从文档中提取{field}信息。如果未找到,返回"文档未提及"。')

        # 内容截取
        selected_content = self._prepare_content(content, fields)

        # 构建消息
        system_prompt = (
            "你是一个专业的招标文件分析助手,擅长从招标文件中准确提取关键信息。"
            "请特别注意:1) 仔细检查PDF附件内容 2) 识别不同表述的同一概念 "
            "3) 提取详细完整的信息 4) 严格按照JSON格式返回结果。"
        )

        prompt = f"""请从以下招标文件内容中提取指定字段信息。

提取规则:
1. 只提取文档中明确存在的信息,严禁推测或编造
2. 如果某字段在文档中未提及,必须返回"文档未提及"
3. 对于价格信息,确保提取完整的价格数值和单位
4. 评标办法和评分说明必须来自文档正文而非目录页

需要提取的字段:
{chr(10).join(field_prompts)}

请以JSON格式返回结果:
{{
    "字段名1": "提取的内容",
    "字段名2": "提取的内容"
}}

招标文件内容:
{selected_content}
"""

        try:
            response = requests.post(
                self.api_url,
                headers={
                    "Authorization": f"Bearer {self.api_key}",
                    "Content-Type": "application/json",
                },
                json={
                    "model": self.model,
                    "messages": [
                        {"role": "system", "content": system_prompt},
                        {"role": "user", "content": prompt},
                    ],
                    "temperature": 0.1,
                    "max_tokens": 3000,
                    "top_p": 0.95,
                },
                timeout=self.timeout,
                verify=False,
            )
            response.raise_for_status()
            result = response.json()

            # 解析返回 JSON
            content_text = result["choices"][0]["message"]["content"]
            extracted = self._parse_json_response(content_text)

            # 后处理:价格同步、格式清理
            extracted = self._post_process(extracted, fields, content)

            return extracted

        except json.JSONDecodeError as e:
            logger.warning(f"DeepSeek 返回 JSON 解析失败: {e}")
            return self._local_extract(content, fields)
        except requests.RequestException as e:
            logger.warning(f"DeepSeek API 请求失败: {e}")
            return self._local_extract(content, fields)
        except Exception as e:
            logger.warning(f"DeepSeek 处理异常: {e}")
            return self._local_extract(content, fields)

    # ---------- 内容预处理 ----------

    def _prepare_content(self, content: str, fields: list) -> str:
        """根据字段类型智能截取内容"""
        if len(content) <= self.max_content:
            return content

        logger.debug(f"内容过长({len(content)}字符),使用预筛选")
        # 提取文档头部
        header = content[:10000]
        contexts = []

        # 按字段类型定义搜索关键词
        keyword_map = {
            "价格": (["最高限价", "最高投标限价", "预估金额", "预估合同金额"],
                    ["最高投标限价", "招标控制价", "最高限价", "控制价", "限价",
                     "投标须知", "万元"]),
            "评标": (["评标办法", "评分说明与资信评分标准"],
                    ["评标办法", "评分标准", "资信标", "技术标", "商务标",
                     "综合评估法", "评定分离"]),
            "资质": (["资质要求", "业绩要求"],
                    ["资质要求", "资格要求", "施工总承包", "资质等级",
                     "业绩要求", "业绩条件"]),
            "日期": (["投标截止日"],
                    ["投标截止", "截止时间", "开标时间", "递交截止"]),
            "付款": (["造价付款方式"],
                    ["付款方式", "工程款支付", "预付款", "进度款",
                     "结算款", "质保金", "合同条款"]),
        }

        for group, (target_fields, keywords) in keyword_map.items():
            if any(f in fields for f in target_fields):
                window = 800 if group in ("评标", "付款") else 500
                for kw in keywords:
                    for m in re.finditer(
                        r'.{0,' + str(window) + '}' + re.escape(kw) +
                        r'.{0,' + str(window) + '}', content, re.DOTALL
                    ):
                        contexts.append(m.group(0))

        # 特别提取投标人须知前附表
        if "业绩要求" in fields or "资质要求" in fields:
            if "投标人须知前附表" in content:
                start_idx = content.find("投标人须知前附表")
                end_idx = min(len(content), start_idx + 10000)  # 提取前附表的较大部分
                contexts.append("=== 投标人须知前附表 ===\n" + content[start_idx:end_idx])

        unique = list(set(contexts))
        combined = "=== 文档头部信息 ===\n" + header + "\n\n" + "\n\n".join(unique)
        return combined[:self.max_content]

    # ---------- 响应解析 ----------

    @staticmethod
    def _parse_json_response(text: str) -> dict:
        """从 DeepSeek 返回文本中提取 JSON"""
        if "```json" in text:
            text = text.split("```json")[1].split("```")[0]
        elif "```" in text:
            text = text.split("```")[1].split("```")[0]
        elif "{" in text:
            start = text.find("{")
            end = text.rfind("}") + 1
            if start != -1 and end > 0:
                text = text[start:end]
        return json.loads(text.strip())

    # ---------- 后处理 ----------

    def _post_process(self, extracted: dict, fields: list,
                      content: str) -> dict:
        """对提取结果进行格式校验和后处理"""
        # 投标截止日格式化
        if "投标截止日" in extracted:
            val = extracted["投标截止日"]
            if val and val != "文档未提及":
                m = re.search(r'(\d{4})[年/-](\d{1,2})[月/-](\d{1,2})', val)
                if m:
                    extracted["投标截止日"] = (
                        f"{m.group(1)}-{m.group(2).zfill(2)}-"
                        f"{m.group(3).zfill(2)}")

        # 价格字段清理 + 同步
        for pf in ("最高限价", "最高投标限价"):
            if pf in extracted and extracted[pf] != "文档未提及":
                pm = re.search(r'([\d,]+\.?\d*)\s*(万元|元)', extracted[pf])
                if pm:
                    extracted[pf] = pm.group(1).replace(",", "") + pm.group(2)

        # 最高限价 ↔ 最高投标限价 同步
        h1 = extracted.get("最高限价", "文档未提及")
        h2 = extracted.get("最高投标限价", "文档未提及")
        if h1 != "文档未提及" and h2 == "文档未提及" and "最高投标限价" in fields:
            extracted["最高投标限价"] = h1
        elif h2 != "文档未提及" and h1 == "文档未提及" and "最高限价" in fields:
            extracted["最高限价"] = h2

        # 文本字段最短长度校验
        for tf in ("资质要求", "业绩要求", "项目概况", "造价付款方式"):
            if tf in extracted and extracted[tf] not in ("文档未提及", ""):
                if len(extracted[tf]) < 3:
                    extracted[tf] = "文档未提及"

        # 跨字段关联:当业绩要求未提取到时,尝试从评分说明中提取
        if "业绩要求" in extracted and extracted["业绩要求"] == "文档未提及":
            if "评分说明与资信评分标准" in extracted:
                score_info = extracted["评分说明与资信评分标准"]
                # 从评分说明中提取业绩相关信息
                if "类似工程业绩" in score_info:
                    # 匹配业绩要求的正则表达式
                    performance_pattern = r'类似工程业绩[::]\s*(.*?)(?:;|。|$)'
                    matches = re.findall(performance_pattern, score_info, re.DOTALL)
                    if matches:
                        performance_info = " ".join(matches)
                        # 清理和格式化
                        performance_info = performance_info.strip()
                        if performance_info:
                            extracted["业绩要求"] = performance_info

        return extracted

    # ---------- 本地回退提取 ----------

    @staticmethod
    def _local_extract(content: str, fields: list) -> dict:
        """API 失败时的本地正则回退提取"""
        result = {}

        field_patterns = {
            "类型": None,  # 特殊处理
            "投标截止日": [
                r'投标截止时间[::]\s*(\d{4}年\d{1,2}月\d{1,2}日)',
                r'投标截止[::]\s*(\d{4}-\d{1,2}-\d{1,2})',
                r'开标时间[::]\s*(\d{4}年\d{1,2}月\d{1,2}日)',
            ],
            "招标人": [
                r'招标人[::]\s*([^\n]+)',
                r'招标单位[::]\s*([^\n]+)',
                r'建设单位[::]\s*([^\n]+)',
            ],
            "有无答辩": None,  # 特殊处理
            "业绩要求": [
                r'业绩要求[::]\s*([^\n]+)',
                r'类似工程业绩[::]\s*([^\n]+)',
                r'投标人业绩[::]\s*([^\n]+)',
            ],
        }

        for field in fields:
            if field == "类型":
                type_kw = {
                    "勘察": ["勘察", "地质", "岩土", "测量"],
                    "设计": ["设计", "规划", "施工图"],
                    "监理": ["监理", "监督"],
                    "EPC": ["EPC"],
                    "采购": ["采购", "设备"],
                    "咨询": ["咨询", "造价", "招标代理"],
                }
                matched = "其他"
                for tname, kws in type_kw.items():
                    if any(k in content[:5000] for k in kws):
                        matched = tname
                        break
                if matched == "其他" and any(
                    k in content[:5000]
                    for k in ["施工", "建筑", "安装", "市政"]
                ):
                    matched = "施工"
                result["类型"] = matched

            elif field == "有无答辩":
                result["有无答辩"] = (
                    "有" if any(k in content for k in ["答辩", "面试", "现场汇报"])
                    else "无"
                )

            elif field in field_patterns and field_patterns[field]:
                for pat in field_patterns[field]:
                    m = re.search(pat, content)
                    if m:
                        val = m.group(1).strip()
                        # 日期格式化
                        if field == "投标截止日" and "年" in val:
                            dm = re.search(
                                r'(\d{4})年(\d{1,2})月(\d{1,2})日', val)
                            if dm:
                                val = (f"{dm.group(1)}-"
                                       f"{dm.group(2).zfill(2)}-"
                                       f"{dm.group(3).zfill(2)}")
                        result[field] = val
                        break

            elif field in ("最高限价", "最高投标限价"):
                patterns = [
                    r'最高投标限价.*?(\d+(?:\.\d+)?)\s*(万元|元)',
                    r'招标控制价.*?(\d+(?:\.\d+)?)\s*(万元|元)',
                    r'最高限价.*?(\d+(?:\.\d+)?)\s*(万元|元)',
                    r'控制价.*?(\d+(?:\.\d+)?)\s*(万元|元)',
                ]
                for pat in patterns:
                    m = re.search(pat, content, re.DOTALL)
                    if m:
                        price = m.group(1).replace(",", "") + m.group(2)
                        result["最高限价"] = price
                        result["最高投标限价"] = price
                        break

        # 特别处理业绩要求:从评分标准中提取
        if "业绩要求" in fields and "业绩要求" not in result:
            # 搜索评分标准中的业绩要求
            score_pattern = r'类似工程业绩[::]\s*(.*?)(?:;|。|$)'
            m = re.search(score_pattern, content, re.DOTALL)
            if m:
                result["业绩要求"] = m.group(1).strip()

        return {k: v for k, v in result.items() if v}
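`_parse_json_response` 处理的典型情况是模型把 JSON 包在 Markdown 代码围栏里。下面是该剥离步骤的独立示意(示例返回内容为虚构;围栏字符串用拼接构造,以免与文档自身的代码块冲突):

```python
import json

# 示意(虚构模型输出):json.loads 之前先剥掉 Markdown 代码围栏
FENCE = "`" * 3  # 即三个反引号
raw = FENCE + 'json\n{"招标人": "某建设投资有限公司", "投标截止日": "2026-03-01"}\n' + FENCE

text = raw
if FENCE + "json" in text:
    # 取围栏之间的部分
    text = text.split(FENCE + "json")[1].split(FENCE)[0]
result = json.loads(text.strip())
```

真实实现还处理不带语言标记的围栏,以及直接用 `find("{")`/`rfind("}")` 截取裸 JSON 的兜底分支。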
143
processors/jiandaoyun.py
Normal file
@@ -0,0 +1,143 @@
# -*- coding: utf-8 -*-
"""
简道云数据上传模块
"""
import logging
import re

import requests
import urllib3

from config import JDY_CONFIG

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
logger = logging.getLogger("ztb")


class JiandaoyunUploader:
    """简道云 API 上传器"""

    BASE_URL = "https://api.jiandaoyun.com/api/v5"

    # 需要转换为数字的字段
    NUMERIC_FIELDS = {"最高限价", "最高投标限价", "预估金额"}

    def __init__(self, api_key: str = None):
        self.api_key = api_key or JDY_CONFIG["api_key"]
        self.headers = {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json",
        }

    def upload_records(self, region_name: str, records: list) -> dict:
        """
        上传记录到对应的简道云表单

        Args:
            region_name: 区域名称(如 "浙江招标文件公示")
            records: 数据记录列表

        Returns:
            {"total": N, "success": N, "failed": N}
        """
        form_config = JDY_CONFIG["forms"].get(region_name)
        if not form_config:
            logger.warning(f"{region_name}: 未找到简道云表单配置,跳过上传")
            return {"total": len(records), "success": 0, "failed": len(records)}

        app_id = form_config["app_id"]
        entry_id = form_config["entry_id"]
        field_mapping = form_config.get("field_mapping", {})

        success = 0
        failed = 0

        for i, record in enumerate(records):
            name = record.get("名称", f"记录{i+1}")
            try:
                jdy_data = self._convert(record, field_mapping)
                if not jdy_data:
                    logger.debug(f"[{i+1}/{len(records)}] {name}: 无有效数据")
                    failed += 1
                    continue

                result = self._create_record(app_id, entry_id, jdy_data)
                if result and result.get("success"):
                    success += 1
                    if (i + 1) % 10 == 0 or (i + 1) == len(records):
                        logger.info(
                            f"  [{i+1}/{len(records)}] 上传进度: "
                            f"成功{success} 失败{failed}")
                else:
                    failed += 1
                    err = result.get("error", "未知") if result else "无返回"
                    logger.warning(
                        f"  [{i+1}/{len(records)}] {name[:25]}: "
                        f"上传失败 - {err}")
            except Exception as e:
                failed += 1
                logger.error(f"  [{i+1}/{len(records)}] {name[:25]}: 异常 - {e}")

        logger.info(f"  {region_name} 上传完成: 成功 {success}, 失败 {failed}")
        return {"total": len(records), "success": success, "failed": failed}

    # ---------- 内部方法 ----------

    def _create_record(self, app_id: str, entry_id: str, data: dict) -> dict:
        """调用简道云 API 创建单条记录"""
        url = f"{self.BASE_URL}/app/entry/data/create"
        payload = {"app_id": app_id, "entry_id": entry_id, "data": data}

        try:
            resp = requests.post(url, headers=self.headers,
                                 json=payload, timeout=30, verify=False)
            if not resp.text:
                return {"success": False,
                        "error": f"空响应, status={resp.status_code}"}

            result = resp.json()
            if resp.status_code == 200 and result.get("data", {}).get("_id"):
                return {"success": True,
                        "data_id": result["data"]["_id"]}
            return {"success": False,
                    "error": result.get("msg", str(result))}
        except Exception as e:
            return {"success": False, "error": str(e)}

    def _convert(self, record: dict, field_mapping: dict) -> dict:
        """将记录转换为简道云 API 格式"""
        jdy_data = {}
        for local_field, jdy_field in field_mapping.items():
            value = record.get(local_field)
            if not value or value in ("文档未提及", "详见公告"):
                continue

            if local_field in self.NUMERIC_FIELDS:
                num = self._parse_price(value)
                if num is not None:
                    jdy_data[jdy_field] = {"value": num}
            else:
                jdy_data[jdy_field] = {"value": value}
        return jdy_data

    @staticmethod
    def _parse_price(price_str) -> int | None:
        """将价格字符串转为纯数字(元)"""
        if not price_str or price_str in ("文档未提及", "详见公告"):
            return None

        s = str(price_str).strip()
        s = re.sub(r'^[约≈大概]*', '', s)
        s = re.sub(r'[((].*?[))]', '', s)
        s = re.sub(r'[元人民币¥¥\s]', '', s)

        try:
            if "亿" in s:
                return int(float(s.replace("亿", "")) * 100_000_000)
            elif "万" in s:
                return int(float(s.replace("万", "")) * 10_000)
            else:
                s = s.replace(",", "").replace(",", "")
                return int(float(s))
        except (ValueError, TypeError):
            return None
202
processors/pipeline.py
Normal file
@@ -0,0 +1,202 @@
# -*- coding: utf-8 -*-
"""
处理管道 - 将爬虫结果经 DeepSeek AI 处理后上传简道云
"""
import json
import logging
import os
import re
import time
from datetime import datetime

from config import REGION_CONFIGS, PROCESSING_CONFIG
from .content_fetcher import ContentFetcher
from .deepseek import DeepSeekProcessor
from .jiandaoyun import JiandaoyunUploader

logger = logging.getLogger("ztb")


class ProcessingPipeline:
    """处理管道:爬虫结果 → 内容获取 → AI提取 → 上传简道云"""

    def __init__(self):
        temp_dir = PROCESSING_CONFIG.get("temp_dir", "temp_files")
        self.fetcher = ContentFetcher(temp_dir=temp_dir)
        self.deepseek = DeepSeekProcessor()
        self.uploader = JiandaoyunUploader()
        self.api_delay = PROCESSING_CONFIG.get("api_delay", 1)
        self.output_dir = PROCESSING_CONFIG.get("output_dir", "data")

    def process_results(self, results: list, site: str,
                        notice_type: str, upload: bool = False) -> list:
        """
        处理爬虫结果:字段映射 → 内容获取 → AI提取 → 可选上传

        Args:
            results: 爬虫返回的结果列表
            site: 站点标识 ("zhejiang" / "taizhou")
            notice_type: 公告类型 (如 "招标文件公示")
            upload: 是否上传简道云

        Returns:
            处理后的记录列表
        """
        # 查找区域配置
        region_key = f"{site}:{notice_type}"
        region_cfg = REGION_CONFIGS.get(region_key)
        if not region_cfg:
            logger.warning(f"未找到区域配置: {region_key},跳过AI处理")
            return results

        region_name = region_cfg["region_name"]
        link_field = region_cfg["link_field"]
        ai_fields = region_cfg["ai_fields"]

        logger.info(f"开始AI处理: {region_name}, {len(results)} 条记录")
        logger.info(f"  需要提取的字段: {ai_fields}")

        processed = []
        success_count = 0
        fail_count = 0

        for i, item in enumerate(results):
            # 1. 字段映射:爬虫字段 → 处理字段
            record = self._map_fields(item, link_field, notice_type)
            name = record.get("名称", "未知")[:35]
            logger.info(f"  [{i+1}/{len(results)}] {name}")

            # 2. 获取全文内容
            url = record.get(link_field, "")
            if not url:
                logger.warning("    无详情链接,跳过")
                processed.append(record)
                fail_count += 1
                continue

            content = self.fetcher.get_full_content(url)
            if not content or len(content) < 200:
                logger.warning(
                    f"    内容过少({len(content) if content else 0}字符),跳过")
                processed.append(record)
                fail_count += 1
                continue

            logger.info(f"    获取到 {len(content)} 字符内容")

            # 3. DeepSeek 提取
            extracted = self.deepseek.extract_fields(
                content, ai_fields, region_name)

            # 4. 从 content 中提取发布时间
            publish_time_match = re.search(r'发布时间:\s*(.*?)\n', content)
            if publish_time_match:
                extracted_publish_time = publish_time_match.group(1).strip()
                # 如果提取到了更详细的发布时间(包含时分秒),更新记录
                if extracted_publish_time:
                    record["发布时间"] = extracted_publish_time
                    # 同时更新"项目发布时间",确保两字段一致
                    record["项目发布时间"] = extracted_publish_time
                    logger.info(f"    ✓ 发布时间: {extracted_publish_time}")

            # 5. 合并结果(AI 优先,原有值保底)
            for field in ai_fields:
                # 保留原始的项目名称、项目批准文号和批准文号,不被AI覆盖
                if field in ["项目名称", "项目批准文号", "批准文号"] and record.get(field):
                    logger.debug(f"    保留原始 {field}: {record[field][:50]}")
                    continue

                ai_val = extracted.get(field, "")
                if ai_val and ai_val != "文档未提及":
                    record[field] = ai_val
                    logger.info(f"    ✓ {field}: {ai_val[:50]}")
                elif not record.get(field):
                    record[field] = ai_val or "文档未提及"
                    logger.debug(f"    ○ {field}: {record[field]}")

            # 处理最高限价字段:优先使用"最高投标限价",为空时使用"最高限价"
            max_price = record.get("最高投标限价", "")
            if not max_price:
                max_price = record.get("最高限价", "")
            if max_price:
                record["最高投标限价"] = max_price
                record["最高限价"] = max_price

            processed.append(record)
            success_count += 1

            # API 限流
            time.sleep(self.api_delay)

        logger.info(f"AI处理完成: 成功 {success_count}, 失败 {fail_count}")

        # 保存 AI 处理结果
        self._save_results(processed, region_name)

        # 上传简道云
        if upload:
            logger.info(f"开始上传 {region_name} 到简道云...")
            self.uploader.upload_records(region_name, processed)

        return processed

    # ---------- 字段映射 ----------

    @staticmethod
    def _map_fields(item: dict, link_field: str,
                    notice_type: str) -> dict:
        """将爬虫输出字段映射为处理所需字段"""
        record = {}

        # 基础字段映射
        record["名称"] = item.get("标题", item.get("项目名称", ""))
        pub_date = item.get("发布日期", item.get("项目发布时间", ""))
        record["发布时间"] = pub_date
        # "项目发布时间"与"发布时间"取同一值,确保格式一致
        record["项目发布时间"] = pub_date  # 台州招标计划 JDY 使用此字段名
        record["地区"] = item.get("地区", "")
        record["招标阶段"] = item.get("公告类型", notice_type)
        record["来源"] = item.get("来源", "")

        # 链接字段:根据公告类型映射
        record[link_field] = item.get("链接", "")

        # 保留爬虫已提取的额外字段
        extra_fields = [
            "项目名称", "项目代码", "招标人", "招标代理",
            "项目批准文号", "项目类型", "预估合同金额(万元)",
            "计划招标时间", "联系电话", "招标估算金额",
        ]
        for f in extra_fields:
            if f in item and item[f]:
                record[f] = item[f]

        # 别名映射
        if "项目批准文号" in record:
            record.setdefault("批准文号", record["项目批准文号"])
        if "项目类型" in record:
            record.setdefault("类型", record["项目类型"])
        if "预估合同金额(万元)" in record:
            val = record["预估合同金额(万元)"]
            record.setdefault("预估金额", f"{val}万元" if val else "")
        if "计划招标时间" in record:
            record.setdefault("招标时间", record["计划招标时间"])

        return record

    # ---------- 结果保存 ----------

    def _save_results(self, records: list, region_name: str):
        """保存 AI 处理结果为 JSON"""
        os.makedirs(self.output_dir, exist_ok=True)
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filepath = os.path.join(
            self.output_dir, f"{region_name}_AI处理_{timestamp}.json")

        with open(filepath, "w", encoding="utf-8") as f:
            json.dump({
                "处理时间": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                "总记录数": len(records),
                "data": records,
            }, f, ensure_ascii=False, indent=2)

        logger.info(f"AI处理结果已保存: {filepath}")
4
requirements.txt
Normal file
@@ -0,0 +1,4 @@
requests>=2.31.0
beautifulsoup4>=4.12.0
pdfplumber>=0.10.0
python-docx>=1.0.0
151
scheduler.py
Normal file
@@ -0,0 +1,151 @@
# -*- coding: utf-8 -*-
r"""
定时爬取入口 —— 每天自动采集前一天的数据

使用方式:
1. 直接运行(单次采集昨天数据):
   python scheduler.py

2. Windows 计划任务(每天早上 8:00 自动运行):
   schtasks /create /tn "ZTB_Spider" /tr "python <项目路径>\scheduler.py" /sc daily /st 08:00

3. Linux cron(每天早上 8:00):
   0 8 * * * cd /path/to/ztb && python scheduler.py >> logs/cron.log 2>&1
"""
import logging
import sys
import os
import traceback
from datetime import datetime

# 确保项目根目录在 sys.path 中
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from config import ZHEJIANG_CONFIG, TAIZHOU_CONFIG, SPIDER_CONFIG, DATA_DIR
from spiders import ZhejiangSpider, TaizhouSpider
from spiders.base import setup_logging

logger = logging.getLogger("ztb")


# ============ 爬取任务配置 ============
# 在这里定义每天要跑哪些任务

DAILY_TASKS = [
    # 浙江省 - 工程建设 - 招标文件公示
    {
        "site": "zhejiang",
        "max_pages": 100,
        "category": "工程建设",
        "notice_type": "招标文件公示",
        "process": True,
        "upload": True,
    },
    # 浙江省 - 工程建设 - 招标公告
    {
        "site": "zhejiang",
        "max_pages": 100,
        "category": "工程建设",
        "notice_type": "招标公告",
        "process": True,
        "upload": True,
    },
    # 浙江省 - 工程建设 - 澄清修改
    {
        "site": "zhejiang",
        "max_pages": 100,
        "category": "工程建设",
        "notice_type": "澄清修改",
        "process": True,
        "upload": True,
    },
    # 台州 - 工程建设 - 招标文件公示
    {
        "site": "taizhou",
        "max_pages": 100,
        "category": "工程建设",
        "notice_type": "招标文件公示",
        "process": True,
        "upload": True,
    },
]


def run_task(task: dict, date_filter: str = "yesterday") -> int:
    """执行单个爬取任务,返回采集条数"""
    site = task["site"]
    max_pages = task.get("max_pages", 10)
    category = task.get("category")
    notice_type = task.get("notice_type")

    if site == "zhejiang":
        config = ZHEJIANG_CONFIG
        spider = ZhejiangSpider(config, SPIDER_CONFIG, DATA_DIR)
    elif site == "taizhou":
        config = TAIZHOU_CONFIG
        spider = TaizhouSpider(config, SPIDER_CONFIG, DATA_DIR)
    else:
        logger.error(f"未知站点: {site}")
        return 0

    spider.crawl(
        max_pages=max_pages,
        category=category,
        notice_type=notice_type,
        date_filter=date_filter,
    )
    spider.save_to_csv()

    # AI 处理 + 简道云上传
    if task.get("process") and spider.results and notice_type:
        from processors import ProcessingPipeline
        pipeline = ProcessingPipeline()
        pipeline.process_results(
            spider.results,
            site=site,
            notice_type=notice_type,
            upload=task.get("upload", False),
        )

    return len(spider.results)


def run_daily():
    """执行每日定时任务"""
    setup_logging()
    start = datetime.now()
    logger.info("=" * 40)
    logger.info(f"定时任务启动: {start.strftime('%Y-%m-%d %H:%M:%S')}")
    logger.info(f"共 {len(DAILY_TASKS)} 个任务")
    logger.info("=" * 40)

    total = 0
    errors = []

    for i, task in enumerate(DAILY_TASKS, 1):
        desc = f"{task['site']} / {task.get('category', '全部')}"
        if task.get("notice_type"):
            desc += f" / {task['notice_type']}"

        logger.info(f"[{i}/{len(DAILY_TASKS)}] {desc}")
        try:
            count = run_task(task)
            total += count
            logger.info(f"[{i}/{len(DAILY_TASKS)}] 完成,{count} 条")
        except Exception as e:
            logger.error(f"[{i}/{len(DAILY_TASKS)}] 失败: {e}")
            logger.debug(traceback.format_exc())
            errors.append(desc)

    elapsed = (datetime.now() - start).total_seconds()
    logger.info("=" * 40)
    logger.info(f"定时任务完成: 共 {total} 条, 耗时 {elapsed:.0f}s")
    if errors:
        logger.error(f"失败任务: {', '.join(errors)}")
    logger.info("=" * 40)

    return total, errors


if __name__ == "__main__":
    run_daily()
5
spiders/__init__.py
Normal file
@@ -0,0 +1,5 @@
# -*- coding: utf-8 -*-
from .zhejiang import ZhejiangSpider
from .taizhou import TaizhouSpider

__all__ = ['ZhejiangSpider', 'TaizhouSpider']
229
spiders/base.py
Normal file
@@ -0,0 +1,229 @@
# -*- coding: utf-8 -*-
"""
爬虫基类 - 基于 requests
"""
import csv
import logging
import os
import random
import signal
import sys
import time
from datetime import datetime
from abc import ABC, abstractmethod
from logging.handlers import RotatingFileHandler

import requests

logger = logging.getLogger("ztb")


def setup_logging(log_dir: str = "logs", level: int = logging.INFO):
    """配置日志系统:文件 + 控制台"""
    os.makedirs(log_dir, exist_ok=True)
    root = logging.getLogger("ztb")
    if root.handlers:  # 避免重复初始化
        return root
    root.setLevel(level)

    fmt = logging.Formatter(
        "%(asctime)s [%(levelname)s] %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )

    # 文件日志:自动轮转,单文件 5MB,保留 5 个
    fh = RotatingFileHandler(
        os.path.join(log_dir, "spider.log"),
        maxBytes=5 * 1024 * 1024,
        backupCount=5,
        encoding="utf-8",
    )
    fh.setLevel(logging.DEBUG)
    fh.setFormatter(fmt)
    root.addHandler(fh)

    # 控制台:只输出 INFO 以上
    ch = logging.StreamHandler()
    ch.setLevel(logging.INFO)
    ch.setFormatter(fmt)
    root.addHandler(ch)

    return root


class BaseSpider(ABC):
    """爬虫基类"""

    def __init__(self, config: dict, spider_config: dict, data_dir: str):
        self.config = config
        self.spider_config = spider_config
        self.data_dir = data_dir
        self.results = []
        self._seen_urls = set()  # 去重

        # 安全计数器
        self._total_requests = 0
        self._consecutive_errors = 0
        self._stopped = False
        self._start_time = time.time()
        self._minute_requests = []  # 每分钟请求时间戳

        # HTTP 会话
        self.session = requests.Session()
        self.session.headers.update({
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                          "AppleWebKit/537.36 (KHTML, like Gecko) "
                          "Chrome/120.0.0.0 Safari/537.36",
            "Accept": "text/html,application/xhtml+xml,application/xml;"
                      "q=0.9,image/webp,*/*;q=0.8",
            "Accept-Language": "zh-CN,zh;q=0.9,en;q=0.8",
        })

        # 注册优雅退出
        signal.signal(signal.SIGINT, self._handle_stop)
        signal.signal(signal.SIGTERM, self._handle_stop)

    # ---------- 安全机制 ----------

    def _handle_stop(self, signum, frame):
        """捕获中断信号,保存已采集数据后退出"""
        logger.warning("收到中断信号,正在保存已采集数据...")
        self._stopped = True
        self.save_to_csv()
        sys.exit(0)

    def _check_limits(self) -> bool:
        """检查是否超出安全阈值,返回 True 表示应停止"""
        max_req = self.spider_config.get("max_total_requests", 300)
        if self._total_requests >= max_req:
            logger.warning(f"达到最大请求数 ({max_req}),停止爬取")
            return True

        max_err = self.spider_config.get("max_consecutive_errors", 5)
        if self._consecutive_errors >= max_err:
            logger.error(f"连续失败 {max_err} 次,触发熔断")
            return True

        return self._stopped

    # ---------- 网络请求 ----------

    def _throttle(self):
        """每分钟请求数限制,超出则等待"""
        rpm_limit = self.spider_config.get("requests_per_minute", 10)
        now = time.time()
        # 清理 60s 以前的时间戳
        self._minute_requests = [t for t in self._minute_requests if now - t < 60]
        if len(self._minute_requests) >= rpm_limit:
            wait = 60 - (now - self._minute_requests[0]) + random.uniform(1, 3)
            if wait > 0:
                logger.info(f"达到速率限制 ({rpm_limit}次/分钟),等待 {wait:.0f}s...")
                time.sleep(wait)
        self._minute_requests.append(time.time())

    def fetch(self, url: str, method: str = "GET", **kwargs) -> requests.Response | None:
        """带重试、限速和安全检查的 HTTP 请求"""
        if self._check_limits():
            return None

        self._throttle()

        timeout = kwargs.pop("timeout", self.spider_config.get("timeout", 30))
        max_retries = self.spider_config.get("max_retries", 3)

        for attempt in range(1, max_retries + 1):
            try:
                self._total_requests += 1
                resp = self.session.request(method, url, timeout=timeout, **kwargs)
                resp.raise_for_status()

                # 检测被拦截的空响应(反爬时返回 200 但 body 为空)
                if len(resp.content) <= 10 and "json" not in resp.headers.get("Content-Type", ""):
                    self._consecutive_errors += 1
                    logger.warning(f"检测到空响应 ({len(resp.content)} bytes),可能被反爬")
                    if attempt < max_retries:
                        wait = 10 * attempt + random.uniform(5, 10)
                        logger.info(f"疑似被反爬拦截,等待 {wait:.0f}s 后重试...")
                        time.sleep(wait)
                        continue
                    return None

                self._consecutive_errors = 0
                return resp
            except requests.RequestException as e:
                self._consecutive_errors += 1
                wait = 2 ** attempt + random.random()
                logger.warning(f"请求失败 ({attempt}/{max_retries}): {e},{wait:.1f}s 后重试")
                if attempt < max_retries:
                    time.sleep(wait)

        logger.error(f"请求失败,已达最大重试次数: {url[:80]}")
        return None

    def delay(self):
        """列表页之间的随机延迟"""
        lo = self.spider_config.get("delay_min", 3)
        hi = self.spider_config.get("delay_max", 6)
        time.sleep(random.uniform(lo, hi))

    def detail_delay(self):
        """详情页请求前的随机延迟"""
        lo = self.spider_config.get("detail_delay_min", 2)
        hi = self.spider_config.get("detail_delay_max", 5)
        time.sleep(random.uniform(lo, hi))

    def print_stats(self):
        """输出爬取统计"""
        elapsed = time.time() - self._start_time
        rpm = self._total_requests / max(elapsed / 60, 0.1)
        logger.info(f"[统计] 总请求: {self._total_requests}, "
                    f"耗时: {elapsed:.0f}s, 速率: {rpm:.1f}次/分钟")

    # ---------- 去重 ----------

    def is_duplicate(self, url: str) -> bool:
        """基于 URL 去重"""
        if url in self._seen_urls:
            return True
        self._seen_urls.add(url)
        return False

    # ---------- 数据存储 ----------

    def save_to_csv(self, filename: str = None):
        """保存数据到 CSV"""
        if not self.results:
            logger.info("没有数据可保存")
            return

        if not filename:
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            filename = f"{self.config['name']}_{timestamp}.csv"

        filepath = os.path.join(self.data_dir, filename)
        os.makedirs(self.data_dir, exist_ok=True)

        # 汇总所有字段
        all_keys = []
        seen = set()
        for row in self.results:
            for k in row:
                if k not in seen:
                    all_keys.append(k)
                    seen.add(k)

        with open(filepath, "w", newline="", encoding="utf-8-sig") as f:
            writer = csv.DictWriter(f, fieldnames=all_keys, extrasaction="ignore")
            writer.writeheader()
            writer.writerows(self.results)

        logger.info(f"数据已保存到: {filepath} (共 {len(self.results)} 条记录)")

    # ---------- 抽象方法 ----------

    @abstractmethod
    def crawl(self, max_pages: int = None, **kwargs):
        """执行爬取,子类实现"""
        pass
360
spiders/taizhou.py
Normal file
@@ -0,0 +1,360 @@
# -*- coding: utf-8 -*-
"""
台州公共资源交易中心爬虫 —— 基于 API + requests
"""
import logging
import os
import re
from datetime import datetime, timedelta

from bs4 import BeautifulSoup

from .base import BaseSpider
from utils.attachment import AttachmentHandler

logger = logging.getLogger("ztb")


class TaizhouSpider(BaseSpider):
    """台州公共资源交易中心爬虫"""

    # ---------- 列表数据 ----------

    def _build_list_url(self, category_code: str, notice_code: str, page_num: int) -> str:
        """构建列表页 URL(SSR 页面,页 1-7)"""
        base = self.config["base_url"]
        if notice_code:
            if category_code:
                path = f"/jyxx/{category_code}/{notice_code}"
            else:
                # 只有 notice_code 时,直接使用 /jyxx/{notice_code}
                path = f"/jyxx/{notice_code}"
        elif category_code:
            path = f"/jyxx/{category_code}"
        else:
            path = "/jyxx"

        if page_num <= 1:
            return f"{base}{path}/trade_infor.html"
        else:
            return f"{base}{path}/{page_num}.html"

    def fetch_list_via_api(self, page_index: int, page_size: int,
                           category_num: str, start_date: str = "",
                           end_date: str = "") -> list:
        """通过 API 获取列表(第 8 页起)"""
        resp = self.fetch(
            self.config["api_url"],
            method="POST",
            data={
                "siteGuid": self.config["site_guid"],
                "categoryNum": category_num,
                "content": "",
                "pageIndex": page_index,
                "pageSize": page_size,
                "YZM": "",
                "ImgGuid": "",
                "startdate": start_date,
                "enddate": end_date,
                "xiaqucode": "",
                "projectjiaoyitype": "",
                "jytype": "",
                "zhuanzai": "",
            },
        )
        if resp is None:
            return []

        try:
            data = resp.json()
            return data.get("custom", {}).get("infodata", [])
        except Exception as e:
            logger.error(f"解析 API 响应失败: {e}")
            return []

    def parse_html_list(self, html: str) -> list:
        """解析 SSR 列表页 HTML"""
        soup = BeautifulSoup(html, "html.parser")
        items = []
        for a in soup.select("a.public-list-item"):
            title = a.get("title", "").strip()
            href = a.get("href", "")
            if href and not href.startswith("http"):
                href = self.config["base_url"] + href

            date_el = a.select_one("span.date")
            date = date_el.text.strip() if date_el else ""

            region_el = a.select_one("span.xiaquclass")
            region = region_el.text.strip().strip("【】") if region_el else ""

            item = {
                "标题": title,
                "发布日期": date,
                "地区": region,
                "链接": href,
                "来源": self.config["name"],
            }

            # 解析特定格式的标题:[招标文件]项目名称[批准文号]
            title_pattern = r"(?:\[招标文件\])?\s*(.*)\s*\[([A-Z0-9]+)\]\s*$"
            match = re.search(title_pattern, title)
            if match:
                item["项目名称"] = match.group(1).strip()
                item["项目批准文号"] = match.group(2).strip()
            else:
                # 正则匹配失败时,直接使用标题作为项目名称
                project_name = title
                # 尝试从标题结尾提取批准文号
                number_pattern = r"\[([A-Z0-9]+)\]\s*$"
                match = re.search(number_pattern, project_name)
                if match:
                    item["项目批准文号"] = match.group(1).strip()
                    # 从项目名称中删除批准文号部分
                    project_name = project_name[:match.start()].strip()
                item["项目名称"] = project_name

            if title and href:
                items.append(item)
        return items

    def parse_api_list(self, records: list) -> list:
        """解析 API 返回的列表数据"""
        items = []
        for rec in records:
            title = rec.get("title2") or rec.get("title", "")
            href = rec.get("infourl", "")
            if href and not href.startswith("http"):
                href = self.config["base_url"] + href

            item = {
                "标题": title.strip(),
                "发布日期": rec.get("infodate", ""),
                "地区": rec.get("xiaquname", "").strip("【】"),
                "链接": href,
                "来源": self.config["name"],
            }

            # 解析特定格式的标题:[招标文件]项目名称[批准文号]
            title_pattern = r"(?:\[招标文件\])?\s*(.*)\s*\[([A-Z0-9]+)\]\s*$"
            match = re.search(title_pattern, title)
            if match:
                item["项目名称"] = match.group(1).strip()
                item["项目批准文号"] = match.group(2).strip()
            else:
                # 正则匹配失败时,直接使用标题作为项目名称
                project_name = title
                # 尝试从标题结尾提取批准文号
                number_pattern = r"\[([A-Z0-9]+)\]\s*$"
                match = re.search(number_pattern, project_name)
                if match:
                    item["项目批准文号"] = match.group(1).strip()
                    # 从项目名称中删除批准文号部分
                    project_name = project_name[:match.start()].strip()
                item["项目名称"] = project_name

            items.append(item)
        return items

    # ---------- 详情页 ----------

    def parse_detail(self, url: str) -> dict:
        """解析详情页"""
        resp = self.fetch(url)
        if resp is None:
            return {}

        detail = {}
        soup = BeautifulSoup(resp.text, "html.parser")

        # 解析表格字段
        field_map = {
            "项目名称": "项目名称",
            "联系人": "联系人",
            "联系方式": "联系方式",
            "建设单位(招标人)": "招标人",
            "项目批准文件及文号": "项目批准文号",
            "项目类型": "项目类型",
            "招标方式": "招标方式",
            "主要建设内容": "主要建设内容",
        }

        for row in soup.select("table tr"):
            cells = row.select("td")
            if len(cells) >= 2:
                key = cells[0].get_text(strip=True)
                value = cells[1].get_text(strip=True)
                if key in field_map and value:
                    detail[field_map[key]] = value
                if len(cells) >= 4:
                    key2 = cells[2].get_text(strip=True)
                    value2 = cells[3].get_text(strip=True)
                    if key2 == "联系方式" and value2:
                        detail["联系方式"] = value2

        # 招标项目表(计划招标时间 / 预估合同金额)
        for table in soup.select("table"):
            headers = [th.get_text(strip=True) for th in table.select("th")]
            if "计划招标时间" in headers:
                data_rows = table.select("tbody tr") or [
                    r for r in table.select("tr") if r.select("td")
                ]
                if data_rows:
                    cells = data_rows[0].select("td")
                    for i, h in enumerate(headers):
                        if i < len(cells):
                            val = cells[i].get_text(strip=True)
                            if h == "计划招标时间" and val:
                                detail["计划招标时间"] = val
                            elif "预估合同金额" in h and val:
                                detail["预估合同金额(万元)"] = val
                break

        return detail

    # ---------- 附件 ----------

    def _extract_attachments(self, url: str) -> list:
        """从详情页提取附件链接"""
        resp = self.fetch(url)
        if resp is None:
            return []

        attachments = []
        for href in re.findall(r'href=["\']([^"\']*\.pdf[^"\']*)', resp.text):
            if not href.startswith("http"):
                href = self.config["base_url"] + href
            attachments.append({"name": href.split("/")[-1], "url": href})
        for href in re.findall(r'href=["\']([^"\']*\.docx?[^"\']*)', resp.text):
            if not href.startswith("http"):
                href = self.config["base_url"] + href
            attachments.append({"name": href.split("/")[-1], "url": href})
        return attachments

    # ---------- 主流程 ----------

    def crawl(self, max_pages: int = None, category: str = None,
              notice_type: str = None, date_filter: str = None,
              download_attachment: bool = False, **kwargs):
        """
        执行爬取

        Args:
            max_pages: 最大爬取页数
            category: 交易领域
            notice_type: 公告类型
            date_filter: 日期过滤
            download_attachment: 是否下载附件
        """
        if max_pages is None:
            max_pages = self.spider_config.get("max_pages", 10)
        page_size = 10  # 台州站固定每页 10 条

        # 日期过滤
        target_date = None
        start_date = end_date = ""
        if date_filter == "yesterday":
            d = datetime.now() - timedelta(days=1)
            target_date = d.strftime("%Y-%m-%d")
            start_date = target_date + " 00:00:00"
            end_date = target_date + " 23:59:59"
            logger.info(f"过滤日期: {target_date}(昨天)")
        elif date_filter:
            target_date = date_filter
            start_date = target_date + " 00:00:00"
            end_date = target_date + " 23:59:59"
            logger.info(f"过滤日期: {target_date}")

        category_code = self.config.get("categories", {}).get(category, "")
        notice_code = self.config.get("notice_types", {}).get(notice_type, "")
        category_num = notice_code or category_code or "002"

        # 附件
        attachment_handler = None
        if download_attachment:
            attachment_dir = os.path.join(self.data_dir, "attachments")
            attachment_handler = AttachmentHandler(attachment_dir)
            logger.info(f"启用附件下载,保存到: {attachment_dir}")

        logger.info(f"开始爬取: {self.config['name']}")
        if category:
            logger.info(f"交易领域: {category}")
        if notice_type:
            logger.info(f"公告类型: {notice_type}")

        for page_num in range(1, max_pages + 1):
            if self._check_limits():
                break

            logger.info(f"正在爬取第 {page_num} 页...")

            # 页 1-7 用 SSR,8+ 用 API
            if page_num <= 7:
                url = self._build_list_url(category_code, notice_code, page_num)
                resp = self.fetch(url)
                if resp is None:
                    break
                page_items = self.parse_html_list(resp.text)
            else:
                records = self.fetch_list_via_api(
                    page_num - 1, page_size, category_num,
                    start_date, end_date,
                )
                if not records:
                    logger.info("没有更多数据")
                    break
                page_items = self.parse_api_list(records)

            if not page_items:
                logger.info("没有更多数据")
                break

            # 日期过滤 + 去重
            count = 0
            has_older = False  # 是否存在比目标日期更早的记录
            for item in page_items:
                if target_date and item["发布日期"] != target_date:
                    if item["发布日期"] < target_date:
                        has_older = True
                    continue
                if self.is_duplicate(item["链接"]):
                    continue

                # 详情页
                self.detail_delay()
                detail = self.parse_detail(item["链接"])
                item.update(detail)

                # 附件
                if download_attachment and attachment_handler:
                    atts = self._extract_attachments(item["链接"])
                    if atts:
                        item["附件数量"] = len(atts)
                        att_names = []
                        for att in atts:
                            att_names.append(att["name"])
                            result = attachment_handler.download_and_extract(att["url"])
                            if result["success"] and result["text"]:
                                item["附件内容摘要"] = result["text"][:2000]
                        item["附件名称"] = " | ".join(att_names)

                self.results.append(item)
                count += 1

            logger.info(f"  获取 {count} 条数据")

            if count == 0:
                if not target_date or has_older:
                    # 无日期过滤 / 已出现更早日期 → 停止
                    logger.info("当前页无新数据,停止翻页")
                    break
                else:
                    # 页面全是比目标日期更新的数据,继续翻页
                    logger.info("  当前页均为更新日期的数据,继续翻页")
                    self.delay()
                    continue

            self.delay()

        self.print_stats()
        logger.info(f"爬取完成,共 {len(self.results)} 条数据")
        return self.results
305
spiders/zhejiang.py
Normal file
@@ -0,0 +1,305 @@
# -*- coding: utf-8 -*-
"""
浙江省公共资源交易中心爬虫 —— 基于 API + requests
"""
import json
import logging
import os
import re
from datetime import datetime, timedelta

from .base import BaseSpider
from utils.attachment import AttachmentHandler

logger = logging.getLogger("ztb")


class ZhejiangSpider(BaseSpider):
    """浙江省公共资源交易中心爬虫"""

    # ---------- API 列表 ----------

    def _build_payload(self, page_index: int, page_size: int,
                       category_code: str, notice_code: str,
                       start_date: str, end_date: str) -> dict:
        """构建浙江省 API 请求体"""
        condition = []
        if notice_code:
            condition.append({
                "fieldName": "categorynum",
                "isLike": True,
                "likeType": 2,
                "equal": notice_code,
            })
        elif category_code:
            condition.append({
                "fieldName": "categorynum",
                "isLike": True,
                "likeType": 2,
                "equal": category_code,
            })

        time_cond = []
        if start_date and end_date:
            time_cond.append({
                "fieldName": "webdate",
                "startTime": f"{start_date} 00:00:00",
                "endTime": f"{end_date} 23:59:59",
            })

        return {
            "token": "",
            "pn": page_index * page_size,
            "rn": page_size,
            "sdt": "", "edt": "",
            "wd": "", "inc_wd": "", "exc_wd": "",
            "fields": "title",
            "cnum": "001",
            "sort": '{"webdate":"0"}',
            "ssort": "title",
            "cl": 5000,
            "terminal": "",
            "condition": condition or None,
            "time": time_cond or None,
            "highlights": "",
            "statistics": None,
            "unionCondition": None,
            "accuracy": "",
            "noParticiple": "0",
            "searchRange": None,
            "isBusiness": "1",
        }

    def fetch_list_page(self, page_index: int, page_size: int,
                        category_code: str, notice_code: str,
                        start_date: str, end_date: str) -> list:
        """通过 API 获取一页列表数据"""
        payload = self._build_payload(
            page_index, page_size, category_code, notice_code,
            start_date, end_date,
        )
        resp = self.fetch(
            self.config["api_url"],
            method="POST",
            json=payload,
            headers={"Referer": self.config["base_url"] + "/jyxxgk/list.html"},
        )
        if resp is None:
            return []

        try:
            data = resp.json()
            return data.get("result", {}).get("records", [])
        except Exception as e:
            logger.error(f"解析 API 响应失败: {e}")
            return []

    # ---------- 解析记录 ----------

    @staticmethod
    def _parse_record(record: dict, source: str) -> dict:
        """将 API 原始记录转换为结果字典"""
        title = record.get("title", "").strip()
        link = record.get("linkurl", "")
        if link and not link.startswith("http"):
            link = "https://ggzy.zj.gov.cn" + link

        date_str = record.get("webdate", "")
        date_short = date_str.split(" ")[0] if date_str else ""

        item = {
            "标题": title,
            "发布日期": date_short,
            "地区": record.get("infod", ""),
            "公告类型": record.get("categoryname", ""),
            "链接": link,
            "来源": source,
        }

        # 解析特定格式的标题:[招标文件]项目名称[批准文号]
        # 正则表达式需确保正确匹配标题格式
        title_pattern = r"\[(?:招标文件|招标公告)\]\s*(.*?)\s*\[([A-Z0-9]+)\]\s*$"
        match = re.search(title_pattern, title)
        if match:
            project_name = match.group(1).strip()
            # 删除结尾的"招标文件公示"、"招标文件预公示"等后缀
            suffixes = ["招标文件公示", "招标文件预公示", "招标公告", "招标预公告"]
            for suffix in suffixes:
                if project_name.endswith(suffix):
                    project_name = project_name[:-len(suffix)].strip()
            item["项目名称"] = project_name
            item["项目批准文号"] = match.group(2).strip()
        else:
            # 如果正则匹配失败,直接使用标题作为项目名称
            project_name = title
            # 删除结尾的"招标文件公示"、"招标文件预公示"等后缀
            suffixes = ["招标文件公示", "招标文件预公示", "招标公告", "招标预公告"]
            for suffix in suffixes:
                if project_name.endswith(suffix):
                    project_name = project_name[:-len(suffix)].strip()
            # 尝试从标题中提取批准文号
            number_pattern = r"\[([A-Z0-9]+)\]\s*$"
            match = re.search(number_pattern, project_name)
            if match:
                item["项目批准文号"] = match.group(1).strip()
                # 从项目名称中删除批准文号部分
                project_name = project_name[:match.start()].strip()
            item["项目名称"] = project_name

        return item

    @staticmethod
    def _parse_content_fields(content: str) -> dict:
        """从 API content 字段提取结构化信息"""
        if not content:
            return {}

        # 清理 HTML 实体
        import html as html_mod
        text = html_mod.unescape(content)
        text = re.sub(r"<[^>]+>", "", text)  # 去 HTML 标签
        text = re.sub(r"\s+", " ", text).strip()

        fields = {}
        patterns = {
            "项目名称": r"项目名称[:::]\s*(.+?)\s{2,}",
            "项目代码": r"项目代码[:::]\s*(.+?)\s{2,}",
            "招标人": r"招标人[:::].*?名称[:::]\s*(.+?)\s{2,}",
            "招标代理": r"代理机构[:::].*?名称[:::]\s*(.+?)\s{2,}",
            "联系电话": r"电\s*话[:::]\s*([\d\-]+)",
            "招标估算金额": r"招标估算金额[:::]\s*([\d,\.]+\s*元)",
        }
        for key, pat in patterns.items():
            m = re.search(pat, text)
            if m:
                fields[key] = m.group(1).strip()

        return fields

    # ---------- 附件 ----------

    def _extract_attachments_from_detail(self, url: str) -> list:
        """访问详情页,提取附件链接"""
        resp = self.fetch(url)
        if resp is None:
            return []

        attachments = []
        # PDF
        for href in re.findall(r'href=["\']([^"\']*\.pdf[^"\']*)', resp.text):
            if not href.startswith("http"):
                href = self.config["base_url"] + href
            name = href.split("/")[-1]
            attachments.append({"name": name, "url": href})
        # Word
        for href in re.findall(r'href=["\']([^"\']*\.docx?[^"\']*)', resp.text):
            if not href.startswith("http"):
                href = self.config["base_url"] + href
            name = href.split("/")[-1]
            attachments.append({"name": name, "url": href})

        return attachments

    # ---------- 主流程 ----------

    def crawl(self, max_pages: int = None, category: str = None,
              notice_type: str = None, date_filter: str = None,
              download_attachment: bool = False, **kwargs):
        """
        执行爬取

        Args:
            max_pages: 最大爬取页数
            category: 交易领域(如 "工程建设")
            notice_type: 公告类型(如 "招标公告")
            date_filter: 日期过滤("yesterday" 或 "2026-02-03")
            download_attachment: 是否下载附件
        """
        if max_pages is None:
            max_pages = self.spider_config.get("max_pages", 10)
        page_size = self.spider_config.get("page_size", 20)

        # 日期范围
        if date_filter == "yesterday":
            d = datetime.now() - timedelta(days=1)
            start_date = end_date = d.strftime("%Y-%m-%d")
            logger.info(f"过滤日期: {start_date}(昨天)")
        elif date_filter:
            start_date = end_date = date_filter
            logger.info(f"过滤日期: {start_date}")
        else:
            # 默认近一个月
            end_date = datetime.now().strftime("%Y-%m-%d")
            start_date = (datetime.now() - timedelta(days=30)).strftime("%Y-%m-%d")

        category_code = self.config.get("categories", {}).get(category, "")
        notice_code = self.config.get("notice_types", {}).get(notice_type, "")

        # 附件处理器
        attachment_handler = None
        if download_attachment:
            attachment_dir = os.path.join(self.data_dir, "attachments")
            attachment_handler = AttachmentHandler(attachment_dir)
            logger.info(f"启用附件下载,保存到: {attachment_dir}")

        logger.info(f"开始爬取: {self.config['name']}")
        if category:
            logger.info(f"交易领域: {category}")
        if notice_type:
            logger.info(f"公告类型: {notice_type}")

        for page_idx in range(max_pages):
            if self._check_limits():
                break

            logger.info(f"正在爬取第 {page_idx + 1} 页...")
            records = self.fetch_list_page(
                page_idx, page_size, category_code, notice_code,
                start_date, end_date,
            )

            if not records:
                logger.info("没有更多数据")
                break

            count = 0
            for rec in records:
                link = rec.get("linkurl", "")
                if link and not link.startswith("http"):
                    link = self.config["base_url"] + link
                if self.is_duplicate(link):
                    continue

                item = self._parse_record(rec, self.config["name"])
                # 从 content 提取详情字段
                detail = self._parse_content_fields(rec.get("content", ""))
                item.update(detail)

                # 附件
                if download_attachment and attachment_handler:
                    self.detail_delay()
                    atts = self._extract_attachments_from_detail(link)
                    if atts:
                        item["附件数量"] = len(atts)
                        att_names = []
                        for att in atts:
                            att_names.append(att["name"])
                            result = attachment_handler.download_and_extract(att["url"])
                            if result["success"] and result["text"]:
                                item["附件内容摘要"] = result["text"][:2000]
                        item["附件名称"] = " | ".join(att_names)

                self.results.append(item)
                count += 1

            logger.info(f" 获取 {count} 条数据")

            if count == 0:
                logger.info("当前页无新数据,停止翻页")
                break

            self.delay()

        self.print_stats()
        logger.info(f"爬取完成,共 {len(self.results)} 条数据")
        return self.results
60
test_attachment_processing.py
Normal file
@@ -0,0 +1,60 @@
# -*- coding: utf-8 -*-
"""
测试附件下载和解析功能
"""
import logging
from processors.content_fetcher import ContentFetcher

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 测试网址
TEST_URL = "https://ggzy.zj.gov.cn/jyxxgk/002001/002001011/20260212/9a7966d8-80f4-475b-897e-f7631bc64d0c.html"


def main():
    """主函数"""
    logger.info(f"开始测试附件处理: {TEST_URL}")

    # 获取内容
    fetcher = ContentFetcher(temp_dir="temp_files")
    content = fetcher.get_full_content(TEST_URL)

    if not content:
        logger.error("无法获取内容")
        return

    logger.info(f"获取到总内容长度: {len(content)} 字符")

    # 检查是否包含附件内容
    if "=== 附件:" in content:
        logger.info("内容中包含附件")

        # 提取附件部分
        attachment_parts = content.split("=== 附件:")
        for i, part in enumerate(attachment_parts[1:], 1):
            attachment_name = part.split("===")[0].strip()
            attachment_content = part.split("===")[1].strip() if len(part.split("===")) > 1 else ""
            logger.info(f"\n附件 {i}: {attachment_name}")
            logger.info(f"附件内容长度: {len(attachment_content)} 字符")

            # 检查附件中是否包含资质要求和业绩要求
            if "资质要求" in attachment_content:
                logger.info("✓ 附件中包含资质要求")
            if "业绩要求" in attachment_content:
                logger.info("✓ 附件中包含业绩要求")
            if "投标人须知前附表" in attachment_content:
                logger.info("✓ 附件中包含投标人须知前附表")
    else:
        logger.warning("内容中不包含附件")

    # 保存完整内容到文件,以便分析
    with open("full_content.txt", "w", encoding="utf-8") as f:
        f.write(content)
    logger.info("\n完整内容已保存到 full_content.txt")


if __name__ == "__main__":
    main()
88
test_crawl_three_items.py
Normal file
@@ -0,0 +1,88 @@
# -*- coding: utf-8 -*-
"""
爬取浙江公共资源交易中心,选择三条进行测试
"""
import logging
import sys
import os
import random

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入配置和处理器
from config import ZHEJIANG_CONFIG, SPIDER_CONFIG, DATA_DIR
from spiders import ZhejiangSpider
from processors import ProcessingPipeline

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


def main():
    """主函数"""
    logger.info("开始爬取浙江公共资源交易中心")

    # 1. 爬取数据
    logger.info("1. 爬取数据:")
    spider = ZhejiangSpider(ZHEJIANG_CONFIG, SPIDER_CONFIG, DATA_DIR)

    # 爬取最新数据(前 2 页)
    spider.crawl(
        max_pages=2,
        category="工程建设",
        notice_type="招标文件公示"
    )

    # 保存到CSV
    spider.save_to_csv()

    # 获取爬取结果
    results = spider.results
    logger.info(f"爬取完成,共获取 {len(results)} 条数据")

    if len(results) == 0:
        logger.error("爬取失败,无数据")
        return

    # 2. 随机选择3条数据
    logger.info("\n2. 选择测试数据:")
    if len(results) >= 3:
        selected_results = random.sample(results, 3)
    else:
        selected_results = results

    logger.info(f"随机选择了 {len(selected_results)} 条数据进行测试")

    # 3. 处理数据
    logger.info("\n3. 处理数据:")
    pipeline = ProcessingPipeline()

    processed = pipeline.process_results(
        selected_results,
        site="zhejiang",
        notice_type="招标文件公示",
        upload=False
    )

    # 4. 展示结果
    logger.info("\n4. 测试结果:")
    for i, record in enumerate(processed, 1):
        logger.info(f"\n{'-'*60}")
        logger.info(f"测试 {i}")
        logger.info(f"{'-'*60}")
        logger.info(f"项目名称: {record.get('项目名称', '文档未提及')}")
        logger.info(f"项目批准文号: {record.get('项目批准文号', '文档未提及')}")
        logger.info(f"批准文号: {record.get('批准文号', '文档未提及')}")
        logger.info(f"类型: {record.get('类型', '文档未提及')}")
        logger.info(f"地区: {record.get('地区', '文档未提及')}")
        logger.info(f"最高投标限价: {record.get('最高投标限价', '文档未提及')}")
        logger.info(f"最高限价: {record.get('最高限价', '文档未提及')}")
        logger.info(f"评标办法: {record.get('评标办法', '文档未提及')}")
        logger.info(f"链接: {record.get('招标文件链接', '无')}")


if __name__ == "__main__":
    main()
106
test_detailed_extract.py
Normal file
@@ -0,0 +1,106 @@
# -*- coding: utf-8 -*-
"""
详细分析指定网址的提取问题
"""
import logging
from processors.content_fetcher import ContentFetcher
from processors.deepseek import DeepSeekProcessor
from config import REGION_CONFIGS

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 测试网址
TEST_URL = "https://ggzy.zj.gov.cn/jyxxgk/002001/002001011/20260212/9a7966d8-80f4-475b-897e-f7631bc64d0c.html"


def main():
    """主函数"""
    logger.info(f"开始分析: {TEST_URL}")

    # 1. 获取内容
    fetcher = ContentFetcher(temp_dir="temp_files")
    content = fetcher.get_full_content(TEST_URL)

    if not content:
        logger.error("无法获取网页内容")
        return

    logger.info(f"获取到内容长度: {len(content)} 字符")

    # 2. 检查关键信息是否存在
    keywords = ["资质要求", "业绩要求", "资格要求", "类似工程业绩"]
    for keyword in keywords:
        if keyword in content:
            logger.info(f"✓ 包含关键词: {keyword}")
            # 查找关键词上下文
            start_idx = max(0, content.find(keyword) - 300)
            end_idx = min(len(content), content.find(keyword) + 500)
            context = content[start_idx:end_idx]
            logger.info(f" 上下文: {context[:300]}...")
        else:
            logger.warning(f"✗ 不包含关键词: {keyword}")

    # 3. 执行提取
    processor = DeepSeekProcessor()

    # 获取浙江招标文件公示的配置
    config_key = "zhejiang:招标文件公示"
    if config_key not in REGION_CONFIGS:
        logger.error(f"未找到配置: {config_key}")
        return

    ai_fields = REGION_CONFIGS[config_key]["ai_fields"]
    logger.info(f"需要提取的字段: {ai_fields}")

    # 4. 执行提取
    extracted = processor.extract_fields(content, ai_fields, "浙江")

    # 5. 分析结果
    logger.info("\n提取结果:")
    for field, value in extracted.items():
        logger.info(f" {field}: {value}")

    # 特别关注资质要求和业绩要求
    for field in ["资质要求", "业绩要求"]:
        if field in extracted:
            value = extracted[field]
            logger.info(f"\n{field}提取结果: {value}")

            if value == "文档未提及":
                logger.warning(f"{field}未提取到,但内容中确实存在相关信息")

                # 分析预处理内容
                prepared_content = processor._prepare_content(content, ai_fields)
                logger.info(f"预处理后内容长度: {len(prepared_content)} 字符")

                if field in prepared_content:
                    logger.info(f"✓ 预处理后内容包含 {field}")
                else:
                    logger.warning(f"✗ 预处理后内容不包含 {field}")

                # 分析提示词
                from config import DEEPSEEK_PROMPTS
                if field in DEEPSEEK_PROMPTS:
                    logger.info(f"提示词: {DEEPSEEK_PROMPTS[field][:100]}...")

                # 检查投标人须知前附表内容
                if "投标人须知前附表" in prepared_content:
                    logger.info("✓ 预处理后内容包含 投标人须知前附表")
                    # 提取前附表内容
                    start_idx = prepared_content.find("投标人须知前附表")
                    end_idx = min(len(prepared_content), start_idx + 5000)
                    preamble_content = prepared_content[start_idx:end_idx]
                    logger.info(f"前附表内容片段: {preamble_content[:300]}...")

    # 6. 尝试直接使用本地提取
    logger.info("\n尝试本地提取:")
    local_extracted = processor._local_extract(content, ai_fields)
    for field, value in local_extracted.items():
        logger.info(f"  {field}: {value}")


if __name__ == "__main__":
    main()
70
test_fix_verification.py
Normal file
@@ -0,0 +1,70 @@
# -*- coding: utf-8 -*-
"""
测试修复后的项目名称和批准文号提取逻辑
"""
import logging
import sys
import os

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


def test_project_name_extraction_fix():
    """测试修复后的项目名称提取逻辑"""
    logger.info("开始测试修复后的项目名称提取逻辑")

    # 测试用例
    test_titles = [
        "湖堤生态修复工程[A3306010720060234001001]",
        "[招标文件](测-试)临海市房建施工0212-2招标文件公示[A3300000090000695005001]",
        "[招标文件]通途路(大闸路-湖西路)拓宽改造工程(监理)项目招标文件预公示[A3302010220026373001001]",
        "[招标文件]集成电路链主企业配套产业园(南片)B、H、F~G地块及配套项目-B地块建设工程(01地块)施工招标文件公示[A3306021280001738001001]",
        "[招标文件]临海市副中心城市片区基础设施更新改造工程—沿河路、前王路及镇政府停车场改造提升招标文件公示[A3300000090000689001001]",
        "[招标文件]宁波市海曙绿道提升工程(施工)招标文件预公示[A3302030230026386001001]",
        "[招标文件]嘉科微二号园一号楼改造提升工程设计采购施工总承包(EPC)招标文件公示[A3304010550007317001001]",
    ]

    # 导入爬虫的解析函数
    from spiders.zhejiang import ZhejiangSpider

    for title in test_titles:
        logger.info(f"\n测试标题: {title}")

        # 模拟解析过程 - 注意:_parse_record 函数的参数是 (record, source)
        # record 应该是 API 返回的原始记录,包含 "title" 字段
        api_record = {
            "title": title,
            "linkurl": "",
            "webdate": "2026-02-13",
            "infod": "",
            "categoryname": "",
        }

        # 调用爬虫的解析函数
        parsed_item = ZhejiangSpider._parse_record(api_record, "测试")

        logger.info(f" 提取结果:")
        logger.info(f"  项目名称: {parsed_item.get('项目名称', '未提取')}")
        logger.info(f"  项目批准文号: {parsed_item.get('项目批准文号', '未提取')}")

        # 验证批准文号是否从项目名称中删除
        project_name = parsed_item.get('项目名称', '')
        approval_number = parsed_item.get('项目批准文号', '')
        if approval_number and approval_number in project_name:
            logger.error(f" ❌ 错误: 批准文号 '{approval_number}' 仍在项目名称中")
        else:
            logger.info(f" ✅ 正确: 批准文号已从项目名称中删除")


def main():
    """主函数"""
    test_project_name_extraction_fix()


if __name__ == "__main__":
    main()
73
test_original_config.py
Normal file
@@ -0,0 +1,73 @@
# -*- coding: utf-8 -*-
"""
使用原始 config.py 测试提取功能
"""
import logging
import sys
import os

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入原始配置
from config import REGION_CONFIGS
from processors.content_fetcher import ContentFetcher
from processors.deepseek import DeepSeekProcessor

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 测试网址
TEST_URL = "https://ggzy.zj.gov.cn/jyxxgk/002001/002001011/20260212/d2f95295-6cb0-40c9-8023-cdbbf7e660ae.html"


def main():
    """主函数"""
    logger.info(f"开始测试: {TEST_URL}")

    # 获取内容
    fetcher = ContentFetcher(temp_dir="temp_files")
    content = fetcher.get_full_content(TEST_URL)

    if not content:
        logger.error("无法获取内容")
        return

    logger.info(f"获取到内容长度: {len(content)} 字符")

    # 执行提取
    processor = DeepSeekProcessor()

    # 获取浙江招标文件公示的配置
    config_key = "zhejiang:招标文件公示"
    if config_key not in REGION_CONFIGS:
        logger.error(f"未找到配置: {config_key}")
        return

    ai_fields = REGION_CONFIGS[config_key]["ai_fields"]
    logger.info(f"需要提取的字段: {ai_fields}")

    # 执行提取
    extracted = processor.extract_fields(content, ai_fields, "浙江")

    # 分析结果
    logger.info("\n提取结果:")
    for field, value in extracted.items():
        logger.info(f" {field}: {value}")

    # 特别关注资质要求和业绩要求
    for field in ["资质要求", "业绩要求"]:
        if field in extracted:
            value = extracted[field]
            logger.info(f"\n{field}提取结果: {value}")

            if value != "文档未提及":
                logger.info(f"✓ {field}提取成功!")
            else:
                logger.warning(f"✗ {field}未提取到")


if __name__ == "__main__":
    main()
61
test_project_name_extraction.py
Normal file
@@ -0,0 +1,61 @@
# -*- coding: utf-8 -*-
"""
测试项目名称提取逻辑
"""
import logging
import sys
import os
import re

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


def test_project_name_extraction():
    """测试项目名称提取逻辑"""
    logger.info("开始测试项目名称提取逻辑")

    # 测试用例
    test_titles = [
        "[招标文件](测-试)临海市房建施工0212-2招标文件公示[A3300000090000695005001]",
        "[招标文件]通途路(大闸路-湖西路)拓宽改造工程(监理)项目招标文件预公示[A3302010220026373001001]",
        "[招标文件]集成电路链主企业配套产业园(南片)B、H、F~G地块及配套项目-B地块建设工程(01地块)施工招标文件公示[A3306021280001738001001]",
        "[招标文件]临海市副中心城市片区基础设施更新改造工程—沿河路、前王路及镇政府停车场改造提升招标文件公示[A3300000090000689001001]",
        "[招标文件]宁波市海曙绿道提升工程(施工)招标文件预公示[A3302030230026386001001]",
        "[招标文件]嘉科微二号园一号楼改造提升工程设计采购施工总承包(EPC)招标文件公示[A3304010550007317001001]",
    ]

    for title in test_titles:
        logger.info(f"\n测试标题: {title}")

        # 使用修改后的标题解析逻辑
        title_pattern = r"\[(?:招标文件|招标公告)\]\s*(.*?)\s*\[([A-Z0-9]+)\]\s*$"
        match = re.search(title_pattern, title)
        if match:
            project_name = match.group(1).strip()
            # 删除结尾的"招标文件公示"、"招标文件预公示"等后缀
            suffixes = ["招标文件公示", "招标文件预公示", "招标公告", "招标预公告"]
            for suffix in suffixes:
                if project_name.endswith(suffix):
                    project_name = project_name[:-len(suffix)].strip()
            project_approval = match.group(2).strip()
            logger.info(f" 提取结果:")
            logger.info(f"  项目名称: {project_name}")
            logger.info(f"  项目批准文号: {project_approval}")
        else:
            logger.warning(" 标题解析失败")


def main():
    """主函数"""
    test_project_name_extraction()


if __name__ == "__main__":
    main()
86
test_prompt_optimization.py
Normal file
@@ -0,0 +1,86 @@
# -*- coding: utf-8 -*-
"""
测试优化后的提示词
"""
import logging
import sys
import os
import re

import requests
from bs4 import BeautifulSoup

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入修复后的配置
from config_fixed import DEEPSEEK_PROMPTS

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 测试网址(选择一个可能包含资质和业绩要求的网址)
TEST_URL = "https://ggzy.zj.gov.cn/jyxxgk/002001/002001011/20260212/d2f95295-6cb0-40c9-8023-cdbbf7e660ae.html"


def get_content(url):
    """获取网页内容"""
    try:
        response = requests.get(url, timeout=30)
        response.encoding = 'utf-8'
        soup = BeautifulSoup(response.text, 'html.parser')

        # 提取主要内容
        content = []

        # 查找标题
        title = soup.find('h1')
        if title:
            content.append(title.get_text(strip=True))

        # 查找正文内容
        content_div = soup.find('div', class_='ewb-article')
        if content_div:
            for p in content_div.find_all('p'):
                text = p.get_text(strip=True)
                if text:
                    content.append(text)

        # 查找附件
        attachments = soup.find_all('a', href=re.compile(r'\.(pdf|doc|docx)$'))
        if attachments:
            content.append("\n附件:")
            for attachment in attachments:
                content.append(f"- {attachment.get_text(strip=True)}: {attachment['href']}")

        return "\n".join(content)
    except Exception as e:
        logging.error(f"获取内容失败: {e}")
        return None


def test_prompts():
    """测试优化后的提示词"""
    logger.info(f"开始测试提示词优化: {TEST_URL}")

    # 获取内容
    content = get_content(TEST_URL)

    if not content:
        logger.error("无法获取内容")
        return

    logger.info(f"获取到内容长度: {len(content)} 字符")

    # 测试关键字段的提示词
    test_fields = ["资质要求", "业绩要求"]

    for field in test_fields:
        logger.info(f"\n=== 测试 {field} 提示词 ===")
        if field in DEEPSEEK_PROMPTS:
            prompt = DEEPSEEK_PROMPTS[field]
            logger.info(f"提示词长度: {len(prompt)} 字符")
            logger.info(f"提示词内容预览: {prompt[:500]}...")

            # 检查内容中
105
test_publish_time_extraction.py
Normal file
@@ -0,0 +1,105 @@
# -*- coding: utf-8 -*-
"""
测试发布时间提取功能
"""
import logging
import sys
import os
import csv
import re

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入配置和处理器
from processors.content_fetcher import ContentFetcher

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 最新的CSV文件路径
CSV_FILE = "data/浙江省公共资源交易中心_20260213_161312.csv"


def read_csv_data(file_path):
    """读取CSV文件数据"""
    data = []
    with open(file_path, 'r', encoding='utf-8') as f:
        reader = csv.DictReader(f)
        for row in reader:
            data.append(row)
    return data


def test_publish_time_extraction():
    """测试发布时间提取功能"""
    logger.info("开始测试发布时间提取功能")

    # 1. 读取CSV数据
    if not os.path.exists(CSV_FILE):
        logger.error(f"CSV文件不存在: {CSV_FILE}")
        return

    data = read_csv_data(CSV_FILE)
    logger.info(f"读取完成,共 {len(data)} 条数据")

    if len(data) == 0:
        logger.error("无数据可测试")
        return

    # 2. 选择前3条数据进行测试
    test_data = data[:3]
    logger.info(f"选择前 {len(test_data)} 条数据进行测试")

    # 3. 测试发布时间提取
    logger.info("\n开始测试发布时间提取:")
    fetcher = ContentFetcher()

    for i, item in enumerate(test_data, 1):
        title = item.get("标题", "")
        url = item.get("链接", "")
        csv_publish_date = item.get("发布日期", "")

        logger.info(f"\n{'-'*60}")
        logger.info(f"测试 {i}")
        logger.info(f"{'-'*60}")
        logger.info(f"标题: {title}")
        logger.info(f"URL: {url}")
        logger.info(f"CSV发布日期: {csv_publish_date}")

        if not url:
            logger.warning("无链接,跳过")
            continue

        # 获取内容
        content = fetcher.get_full_content(url)
        if not content:
            logger.warning("获取内容失败,跳过")
            continue

        # 检查是否包含发布时间
        if "发布时间:" in content:
            # 提取发布时间
            match = re.search(r'发布时间:\s*(.*?)\n', content)
            if match:
                publish_time = match.group(1).strip()
                logger.info(f"提取的发布时间: {publish_time}")

                # 比较CSV发布日期和提取的发布时间
                if csv_publish_date in publish_time:
                    logger.info("✓ 发布时间提取正确")
                else:
                    logger.warning("✗ 发布时间与CSV日期不一致")
            else:
                logger.warning("✗ 发布时间格式不正确")
        else:
            logger.warning("✗ 未提取到发布时间")


def main():
    """主函数"""
    test_publish_time_extraction()


if __name__ == "__main__":
    main()
163
test_random_data_extract.py
Normal file
@@ -0,0 +1,163 @@
# -*- coding: utf-8 -*-
"""
随机选择原始数据进行提取测试
特别关注项目名称和项目批准文号
"""
import logging
import sys
import os
import csv
import random
import datetime

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入配置和处理器
from config import REGION_CONFIGS
from processors.content_fetcher import ContentFetcher
from processors.deepseek import DeepSeekProcessor

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 原始数据文件路径
CSV_FILE = "data/浙江省公共资源交易中心_20260213_142920.csv"

# 结果输出文件
OUTPUT_MD = "随机提取分析报告.md"


def read_csv_data(file_path):
    """读取CSV文件数据"""
    data = []
    with open(file_path, 'r', encoding='utf-8') as f:
        reader = csv.DictReader(f)
        for row in reader:
            data.append(row)
    return data


def extract_data_from_url(url, title):
    """从URL提取数据"""
    try:
        # 获取内容
        fetcher = ContentFetcher(temp_dir="temp_files")
        content = fetcher.get_full_content(url)

        if not content:
            logger.warning(f"无法获取内容: {url}")
            return None

        logger.info(f"获取到内容长度: {len(content)} 字符")

        # 执行提取
        processor = DeepSeekProcessor()

        # 获取浙江招标文件公示的配置
        config_key = "zhejiang:招标文件公示"
        if config_key not in REGION_CONFIGS:
            logger.error(f"未找到配置: {config_key}")
            return None

        ai_fields = REGION_CONFIGS[config_key]["ai_fields"]
        logger.info(f"需要提取的字段: {ai_fields}")

        # 执行提取
        extracted = processor.extract_fields(content, ai_fields, "浙江")

        # 添加项目名称
        extracted["项目名称"] = title

        return extracted
    except Exception as e:
        logger.error(f"提取失败: {e}")
        return None


def main():
    """主函数"""
    logger.info("开始随机提取测试")

    # 读取CSV数据
    if not os.path.exists(CSV_FILE):
        logger.error(f"CSV文件不存在: {CSV_FILE}")
        return

    data = read_csv_data(CSV_FILE)
    logger.info(f"总数据量: {len(data)}")

    # 随机选择5条数据
    if len(data) > 5:
        selected_data = random.sample(data, 5)
    else:
        selected_data = data

    logger.info(f"随机选择了 {len(selected_data)} 条数据")

    # 提取结果
    results = []
    for i, item in enumerate(selected_data, 1):
        title = item.get("标题", "")
        url = item.get("链接", "")
        project_name = item.get("项目名称", "")
        approval_number = item.get("项目批准文号", "")

        logger.info(f"\n{'-'*50}")
        logger.info(f"测试 {i}: {title}")
        logger.info(f"URL: {url}")
        logger.info(f"项目名称: {project_name}")
        logger.info(f"项目批准文号: {approval_number}")

        if not url:
            logger.warning("无链接,跳过")
            continue

        # 提取数据
        extracted = extract_data_from_url(url, project_name)

        if extracted:
            # 直接从CSV中添加项目批准文号(仅在提取成功时,避免对 None 赋值)
            if approval_number:
                extracted["批准文号"] = approval_number
            results.append(extracted)
            logger.info("提取成功!")
        else:
            logger.warning("提取失败")

    # 处理最高限价字段:优先使用最高投标限价,为空时使用最高限价
    for result in results:
        max_price = result.get("最高投标限价", "")
        if not max_price:
            max_price = result.get("最高限价", "")
        result["最高投标限价"] = max_price or "文档未提及"

    # 生成MD报告
    current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    with open(OUTPUT_MD, 'w', encoding='utf-8') as f:
        f.write("# 随机提取分析报告\n\n")
        f.write(f"生成时间: {current_time}\n\n")
        f.write(f"共提取了 {len(results)} 条数据\n\n")

        for i, result in enumerate(results, 1):
            f.write(f"## 提取结果 {i}\n\n")
            f.write(f"### 项目名称\n{result.get('项目名称', '文档未提及')}\n\n")
            f.write(f"### 项目批准文号\n{result.get('批准文号', '文档未提及')}\n\n")
            f.write(f"### 其他关键信息\n")
            f.write("| 字段 | 值 |\n")
            f.write("|------|------|\n")

            # 选择重要字段展示
            important_fields = ["类型", "地区", "投标截止日", "最高投标限价", "资质要求", "业绩要求", "评标办法", "有无答辩", "招标人"]
            for field in important_fields:
                value = result.get(field, "文档未提及")
                f.write(f"| {field} | {value} |\n")

            f.write("\n")

    logger.info(f"报告生成完成: {OUTPUT_MD}")


if __name__ == "__main__":
    main()
138
test_random_extract.py
Normal file
@@ -0,0 +1,138 @@
# -*- coding: utf-8 -*-
"""
随机选择几条原始数据执行提取流程,并生成分析报告
"""
import csv
import datetime
import logging
import random

from processors.content_fetcher import ContentFetcher
from processors.deepseek import DeepSeekProcessor
from config import REGION_CONFIGS

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 原始数据文件路径
CSV_FILE = "data/浙江省公共资源交易中心_20260213_142920.csv"

# 结果输出文件
OUTPUT_MD = "随机提取分析报告.md"


def read_csv_data(file_path):
    """读取CSV文件数据"""
    with open(file_path, 'r', encoding='utf-8') as f:
        return list(csv.DictReader(f))


def extract_data(url, title):
    """执行数据提取"""
    logger.info(f"\n开始提取: {title}")
    logger.info(f"URL: {url}")

    # 1. 获取内容
    fetcher = ContentFetcher(temp_dir="temp_files")
    content = fetcher.get_full_content(url)

    if not content:
        logger.error("无法获取网页内容")
        return None

    logger.info(f"获取到内容长度: {len(content)} 字符")

    # 2. 提取字段
    processor = DeepSeekProcessor()

    # 获取浙江招标文件公示的配置
    config_key = "zhejiang:招标文件公示"
    if config_key not in REGION_CONFIGS:
        logger.error(f"未找到配置: {config_key}")
        return None

    ai_fields = REGION_CONFIGS[config_key]["ai_fields"]

    # 3. 执行提取
    return processor.extract_fields(content, ai_fields, "浙江")


def generate_md_report(data_list, extracted_results):
    """生成MD格式报告"""
    md_content = "# 随机提取分析报告\n\n"
    md_content += f"生成时间: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
    md_content += "## 分析结果\n\n"

    for i, (data, extracted) in enumerate(zip(data_list, extracted_results)):
        title = data.get("标题", "未知")
        url = data.get("链接", "")
        publish_date = data.get("发布日期", "")
        region = data.get("地区", "")

        md_content += f"### 项目 {i+1}: {title}\n"
        md_content += f"- 发布日期: {publish_date}\n"
        md_content += f"- 地区: {region}\n"
        md_content += f"- 链接: {url}\n\n"

        if not extracted:
            md_content += "**提取失败: 无法获取内容**\n\n"
            continue

        md_content += "#### 提取结果\n"
        md_content += "| 字段 | 提取值 | 分析 |\n"
        md_content += "|------|--------|------|\n"

        for field, value in extracted.items():
            if value == "文档未提及":
                analysis = "**空白原因**: 文档中未明确提及该信息"
            elif value:
                analysis = "提取成功"
            else:
                analysis = "**空白原因**: 提取结果为空"

            # 转义Markdown表格中的特殊字符
            value_clean = str(value).replace("|", "\\|").replace("\n", " ")
            md_content += f"| {field} | {value_clean} | {analysis} |\n"

        md_content += "\n"

    # 保存MD文件
    with open(OUTPUT_MD, 'w', encoding='utf-8') as f:
        f.write(md_content)

    logger.info(f"报告已生成: {OUTPUT_MD}")


def main():
    """主函数"""
    # 读取原始数据
    data = read_csv_data(CSV_FILE)
    logger.info(f"原始数据条数: {len(data)}")

    # 随机选择5条数据(数据不足5条时全部选取)
    random.seed(42)  # 设置随机种子以保证可重复性
    selected_data = random.sample(data, min(5, len(data)))

    logger.info(f"随机选择了 {len(selected_data)} 条数据")

    # 执行提取
    extracted_results = []
    for item in selected_data:
        url = item.get("链接", "")
        title = item.get("标题", "")
        if url:
            extracted_results.append(extract_data(url, title))
        else:
            extracted_results.append(None)

    # 生成报告
    generate_md_report(selected_data, extracted_results)


if __name__ == "__main__":
    main()
98
test_real_with_fixed_config.py
Normal file
@@ -0,0 +1,98 @@
# -*- coding: utf-8 -*-
"""
使用修复后的配置文件测试真实提取功能
"""
import logging
import os
import sys

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 临时用 config_fixed 替换 config 模块,脚本结束时在 finally 中恢复
import config_fixed

# 保存原始的config模块引用(必须在替换之前获取,否则始终为None)
original_config = sys.modules.get('config')

# 将config_fixed设置为config模块
sys.modules['config'] = config_fixed

# 现在导入处理器(其内部 import config 时拿到的是 config_fixed)
from processors.content_fetcher import ContentFetcher
from processors.deepseek import DeepSeekProcessor

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 测试网址
TEST_URL = "https://ggzy.zj.gov.cn/jyxxgk/002001/002001011/20260212/9a7966d8-80f4-475b-897e-f7631bc64d0c.html"


def main():
    """主函数"""
    logger.info(f"开始测试: {TEST_URL}")

    # 获取内容
    fetcher = ContentFetcher(temp_dir="temp_files")
    content = fetcher.get_full_content(TEST_URL)

    if not content:
        logger.error("无法获取内容")
        return

    logger.info(f"获取到内容长度: {len(content)} 字符")

    # 执行提取
    processor = DeepSeekProcessor()

    # 获取浙江招标文件公示的配置
    config_key = "zhejiang:招标文件公示"
    from config import REGION_CONFIGS

    if config_key not in REGION_CONFIGS:
        logger.error(f"未找到配置: {config_key}")
        return

    ai_fields = REGION_CONFIGS[config_key]["ai_fields"]
    logger.info(f"需要提取的字段: {ai_fields}")

    # 执行提取
    extracted = processor.extract_fields(content, ai_fields, "浙江")

    # 分析结果
    logger.info("\n提取结果:")
    for field, value in extracted.items():
        logger.info(f"  {field}: {value}")

    # 特别关注资质要求和业绩要求
    for field in ["资质要求", "业绩要求"]:
        if field in extracted:
            value = extracted[field]
            logger.info(f"\n{field}提取结果: {value}")

            if value != "文档未提及":
                logger.info(f"✓ {field}提取成功!")
            else:
                logger.warning(f"✗ {field}未提取到")


if __name__ == "__main__":
    try:
        main()
    finally:
        # 恢复原始配置
        if original_config:
            sys.modules['config'] = original_config
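上面这种手动替换并在 finally 中恢复 `sys.modules['config']` 的做法容易遗漏恢复步骤。一个可供参考的替代写法(示意代码,下面的假模块与 `REGION_CONFIGS` 内容仅用于演示,并非项目实际代码)是使用标准库 `unittest.mock.patch.dict`,退出上下文时自动还原:

```python
import sys
import types
from unittest import mock

# 示意:构造一个假的 config 模块来演示替换机制
fake_config = types.ModuleType("config")
fake_config.REGION_CONFIGS = {"zhejiang:招标文件公示": {"ai_fields": ["类型"]}}

with mock.patch.dict(sys.modules, {"config": fake_config}):
    import config  # with 块内解析到的是 fake_config
    assert config.REGION_CONFIGS["zhejiang:招标文件公示"]["ai_fields"] == ["类型"]
# 退出 with 块后 sys.modules 被自动还原,无需手写 finally
```

在 with 块内完成 `from processors...` 的导入即可达到与手动替换相同的效果。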
108
test_reextract_target_url.py
Normal file
@@ -0,0 +1,108 @@
# -*- coding: utf-8 -*-
"""
针对指定网址的重新提取测试
分析业绩要求未提取到的原因
"""
import logging

from processors.content_fetcher import ContentFetcher
from processors.deepseek import DeepSeekProcessor
from config import REGION_CONFIGS

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 测试网址
TARGET_URL = "https://ggzy.zj.gov.cn/jyxxgk/002001/002001011/20260212/d2f95295-6cb0-40c9-8023-cdbbf7e660ae.html"


def test_reextract():
    """重新提取指定网址的信息"""
    logger.info(f"开始测试重新提取: {TARGET_URL}")

    # 1. 获取内容
    fetcher = ContentFetcher(temp_dir="temp_files")
    content = fetcher.get_full_content(TARGET_URL)

    if not content:
        logger.error("无法获取网页内容")
        return

    logger.info(f"获取到内容长度: {len(content)} 字符")

    # 2. 提取字段
    processor = DeepSeekProcessor()

    # 获取浙江招标文件公示的配置
    config_key = "zhejiang:招标文件公示"
    if config_key not in REGION_CONFIGS:
        logger.error(f"未找到配置: {config_key}")
        return

    ai_fields = REGION_CONFIGS[config_key]["ai_fields"]
    logger.info(f"需要提取的字段: {ai_fields}")

    # 3. 执行提取
    extracted = processor.extract_fields(content, ai_fields, "浙江")

    # 4. 分析结果
    logger.info("\n提取结果:")
    for field, value in extracted.items():
        logger.info(f"  {field}: {value}")

    # 特别关注业绩要求
    if "业绩要求" in extracted:
        performance_req = extracted["业绩要求"]
        logger.info(f"\n业绩要求提取结果: {performance_req}")

        if performance_req == "文档未提及":
            logger.warning("业绩要求未提取到,开始分析原因...")

            # 分析内容中是否包含业绩相关关键词
            performance_keywords = ["业绩要求", "业绩条件", "投标人业绩", "类似项目", "工程经验"]
            found_keywords = []

            for keyword in performance_keywords:
                if keyword in content:
                    found_keywords.append(keyword)
                    # 截取关键词上下文
                    idx = content.find(keyword)
                    start_idx = max(0, idx - 200)
                    end_idx = min(len(content), idx + 800)
                    context = content[start_idx:end_idx]
                    logger.info(f"\n找到关键词 '{keyword}' 的上下文:")
                    logger.info(f"{context[:500]}...")

            if found_keywords:
                logger.info(f"\n在内容中找到相关关键词: {found_keywords}")
                logger.info("可能的问题: 关键词存在但提取逻辑未正确识别")
            else:
                logger.info("\n在内容中未找到业绩相关关键词")
                logger.info("可能的问题: 内容中确实没有业绩要求信息")
        else:
            logger.info("业绩要求提取成功")
    else:
        logger.error("提取结果中没有业绩要求字段")

    # 5. 分析内容预处理
    logger.info("\n分析内容预处理...")
    prepared_content = processor._prepare_content(content, ai_fields)
    logger.info(f"预处理后内容长度: {len(prepared_content)} 字符")

    # 检查预处理后是否包含业绩相关内容
    found_in_prepared = [kw for kw in ["业绩要求", "业绩条件", "投标人业绩"] if kw in prepared_content]

    if found_in_prepared:
        logger.info(f"预处理后内容中包含业绩相关关键词: {found_in_prepared}")
    else:
        logger.warning("预处理后内容中不包含业绩相关关键词")
        logger.warning("可能的问题: 内容预处理时未包含业绩相关部分")


if __name__ == "__main__":
    test_reextract()
100
test_single_item.py
Normal file
@@ -0,0 +1,100 @@
# -*- coding: utf-8 -*-
"""
测试单条数据处理
"""
import logging
import os
import re
import sys

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入处理管道
from processors.pipeline import ProcessingPipeline

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 测试URL
TEST_URL = "https://ggzy.zj.gov.cn/jyxxgk/002001/002001011/20260212/9a7966d8-80f4-475b-897e-f7631bc64d0c.html"

# 模拟爬虫结果
TEST_ITEM = {
    "标题": "[招标文件](测-试)临海市房建施工0212-2招标文件公示[A3300000090000695005001]",
    "发布日期": "2026-02-12",
    "地区": "临海市",
    "公告类型": "招标文件公示",
    "链接": TEST_URL,
    "来源": "浙江省公共资源交易中心"
}

# 修复后的标题解析正则(与 ZhejiangSpider 中一致)
TITLE_PATTERN = r"\[(?:招标文件|招标公告)\]\s*(.*?)\s*\[([A-Z0-9]+)\]\s*$"


def main():
    """主函数"""
    logger.info("开始单条数据测试")

    # 1. 测试标题解析
    title = TEST_ITEM["标题"]
    match = re.search(TITLE_PATTERN, title)
    if match:
        project_name = match.group(1).strip()
        project_approval = match.group(2).strip()
        logger.info("标题解析结果:")
        logger.info(f"  项目名称: {project_name}")
        logger.info(f"  项目批准文号: {project_approval}")
    else:
        logger.warning("标题解析失败")

    # 2. 测试处理管道
    logger.info("\n测试处理管道:")
    pipeline = ProcessingPipeline()

    # 模拟ZhejiangSpider的处理过程,添加项目名称和项目批准文号
    test_item_with_fields = TEST_ITEM.copy()
    if match:
        test_item_with_fields["项目名称"] = match.group(1).strip()
        test_item_with_fields["项目批准文号"] = match.group(2).strip()

    logger.info("添加字段后的测试项:")
    logger.info(f"  项目名称: {test_item_with_fields.get('项目名称', '无')}")
    logger.info(f"  项目批准文号: {test_item_with_fields.get('项目批准文号', '无')}")

    # 模拟爬虫结果列表
    results = [test_item_with_fields]

    # 处理结果
    processed = pipeline.process_results(
        results,
        site="zhejiang",
        notice_type="招标文件公示",
        upload=False
    )

    # 分析结果
    if processed:
        record = processed[0]
        logger.info("\n处理结果:")
        logger.info(f"  项目名称: {record.get('项目名称', '文档未提及')}")
        logger.info(f"  项目批准文号: {record.get('项目批准文号', '文档未提及')}")
        logger.info(f"  批准文号: {record.get('批准文号', '文档未提及')}")
        logger.info(f"  最高投标限价: {record.get('最高投标限价', '文档未提及')}")
        logger.info(f"  最高限价: {record.get('最高限价', '文档未提及')}")
    else:
        logger.error("处理失败")


if __name__ == "__main__":
    main()
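作为补充,下面单独演示上述标题解析正则的行为(示意代码,正则与测试标题均取自本文件):

```python
import re

# 取自 test_single_item.py 的标题解析正则与示例标题
TITLE_PATTERN = r"\[(?:招标文件|招标公告)\]\s*(.*?)\s*\[([A-Z0-9]+)\]\s*$"
title = "[招标文件](测-试)临海市房建施工0212-2招标文件公示[A3300000090000695005001]"

m = re.search(TITLE_PATTERN, title)
print(m.group(1))  # (测-试)临海市房建施工0212-2招标文件公示
print(m.group(2))  # A3300000090000695005001
```

非贪婪的 `(.*?)` 保证项目名称在遇到末尾的批准文号方括号前停止,`\s*$` 允许标题末尾有空白。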
86
test_upload_other_forms.py
Normal file
@@ -0,0 +1,86 @@
# -*- coding: utf-8 -*-
"""
测试上传招标公告和招标计划表单
"""
import logging
import os
import sys

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入配置和处理器
from config import ZHEJIANG_CONFIG, SPIDER_CONFIG, DATA_DIR
from spiders import ZhejiangSpider
from processors import ProcessingPipeline

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


def crawl_and_upload(notice_type, max_pages=1):
    """爬取并上传指定类型的表单"""
    logger.info(f"\n{'='*70}")
    logger.info(f"开始处理: {notice_type}")
    logger.info(f"{'='*70}")

    # 1. 爬取数据
    logger.info("1. 爬取数据:")
    spider = ZhejiangSpider(ZHEJIANG_CONFIG, SPIDER_CONFIG, DATA_DIR)
    spider.crawl(
        max_pages=max_pages,
        category="工程建设",
        notice_type=notice_type
    )

    # 保存到CSV
    spider.save_to_csv()

    # 获取爬取结果
    results = spider.results
    logger.info(f"爬取完成,共获取 {len(results)} 条数据")

    if len(results) == 0:
        logger.error("爬取失败,无数据")
        return

    # 2. 处理数据
    logger.info("\n2. 处理数据:")
    pipeline = ProcessingPipeline()
    processed = pipeline.process_results(
        results,
        site="zhejiang",
        notice_type=notice_type,
        upload=True  # 上传到简道云
    )

    # 3. 展示结果
    logger.info("\n3. 处理结果:")
    logger.info(f"成功处理 {len(processed)} 条数据")

    # 展示前2条的关键信息
    logger.info("\n前2条数据关键信息:")
    for i, record in enumerate(processed[:2], 1):
        logger.info(f"\n测试 {i}")
        logger.info(f"项目名称: {record.get('项目名称', '文档未提及')}")
        logger.info(f"项目批准文号: {record.get('项目批准文号', '文档未提及')}")
        logger.info(f"批准文号: {record.get('批准文号', '文档未提及')}")


def main():
    """主函数"""
    logger.info("开始上传招标公告和招标计划表单")

    # 处理招标公告
    crawl_and_upload("招标公告", max_pages=1)

    # 处理招标计划
    crawl_and_upload("招标计划", max_pages=1)


if __name__ == "__main__":
    main()
138
test_with_fixed_config.py
Normal file
@@ -0,0 +1,138 @@
# -*- coding: utf-8 -*-
"""
使用修复后的配置文件测试提取功能
"""
import logging
import os
import re
import sys

import requests
from bs4 import BeautifulSoup

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入修复后的配置
from config_fixed import REGION_CONFIGS, DEEPSEEK_PROMPTS


# 简化的ContentFetcher类
class ContentFetcher:
    def __init__(self, temp_dir="temp_files"):
        self.temp_dir = temp_dir
        os.makedirs(temp_dir, exist_ok=True)

    def get_full_content(self, url):
        try:
            response = requests.get(url, timeout=30)
            response.encoding = 'utf-8'
            soup = BeautifulSoup(response.text, 'html.parser')

            # 提取主要内容
            content = []

            # 查找标题
            title = soup.find('h1')
            if title:
                content.append(title.get_text(strip=True))

            # 查找正文内容
            content_div = soup.find('div', class_='ewb-article')
            if content_div:
                for p in content_div.find_all('p'):
                    text = p.get_text(strip=True)
                    if text:
                        content.append(text)

            # 查找附件
            attachments = soup.find_all('a', href=re.compile(r'\.(pdf|doc|docx)$'))
            if attachments:
                content.append("\n附件:")
                for attachment in attachments:
                    content.append(f"- {attachment.get_text(strip=True)}: {attachment['href']}")

            return "\n".join(content)
        except Exception as e:
            logging.error(f"获取内容失败: {e}")
            return None


# 简化的DeepSeekProcessor类:仅按关键词模拟,实际项目中会调用DeepSeek API
class DeepSeekProcessor:
    def extract_fields(self, content, fields, region_name):
        results = {}
        for field in fields:
            if field in DEEPSEEK_PROMPTS:
                # 简单模拟:如果内容包含关键词,返回模拟结果
                if field == "资质要求" and any(keyword in content for keyword in ["资质", "资格"]):
                    results[field] = "建筑工程施工总承包三级及以上"
                elif field == "业绩要求" and any(keyword in content for keyword in ["业绩", "经验"]):
                    results[field] = "近3年类似工程业绩不少于2项"
                else:
                    results[field] = "文档未提及"
            else:
                results[field] = "文档未提及"
        return results


# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 测试网址
TEST_URL = "https://ggzy.zj.gov.cn/jyxxgk/002001/002001011/20260212/9a7966d8-80f4-475b-897e-f7631bc64d0c.html"


def main():
    """主函数"""
    logger.info(f"开始测试: {TEST_URL}")

    # 获取内容
    fetcher = ContentFetcher(temp_dir="temp_files")
    content = fetcher.get_full_content(TEST_URL)

    if not content:
        logger.error("无法获取内容")
        return

    logger.info(f"获取到内容长度: {len(content)} 字符")

    # 执行提取
    processor = DeepSeekProcessor()

    # 获取浙江招标文件公示的配置
    config_key = "zhejiang:招标文件公示"
    if config_key not in REGION_CONFIGS:
        logger.error(f"未找到配置: {config_key}")
        return

    ai_fields = REGION_CONFIGS[config_key]["ai_fields"]
    logger.info(f"需要提取的字段: {ai_fields}")

    extracted = processor.extract_fields(content, ai_fields, "浙江")

    # 分析结果
    logger.info("\n提取结果:")
    for field, value in extracted.items():
        logger.info(f"  {field}: {value}")

    # 特别关注资质要求和业绩要求
    for field in ["资质要求", "业绩要求"]:
        if field in extracted:
            value = extracted[field]
            logger.info(f"\n{field}提取结果: {value}")

            if value != "文档未提及":
                logger.info(f"✓ {field}提取成功!")
            else:
                logger.warning(f"✗ {field}未提取到")


if __name__ == "__main__":
    main()
72
upload_json_to_jdy.py
Normal file
@@ -0,0 +1,72 @@
# -*- coding: utf-8 -*-
"""
从JSON文件上传数据到简道云
"""
import json
import logging
import os
import sys

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入简道云上传器
from processors.jiandaoyun import JiandaoyunUploader

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


def upload_json_to_jdy(json_file, region_name):
    """
    从JSON文件上传数据到简道云

    Args:
        json_file: JSON文件路径
        region_name: 区域名称(对应简道云表单配置)
    """
    logger.info(f"开始上传 {json_file} 到简道云")

    # 1. 读取JSON文件
    if not os.path.exists(json_file):
        logger.error(f"JSON文件不存在: {json_file}")
        return

    try:
        with open(json_file, 'r', encoding='utf-8') as f:
            data = json.load(f)
    except Exception as e:
        logger.error(f"读取JSON文件失败: {e}")
        return

    # 2. 提取记录数据
    records = data.get('data', [])
    if not records:
        logger.error("JSON文件中没有数据")
        return

    logger.info(f"读取完成,共 {len(records)} 条记录")

    # 3. 上传到简道云
    uploader = JiandaoyunUploader()
    result = uploader.upload_records(region_name, records)

    # 4. 输出结果
    logger.info(f"上传完成: 成功 {result['success']}, 失败 {result['failed']}")
    return result


def main():
    """主函数"""
    # 最新的AI处理结果文件
    json_file = "data/浙江招标公告_AI处理_20260213_175357.json"
    region_name = "浙江招标公告"

    upload_json_to_jdy(json_file, region_name)


if __name__ == "__main__":
    main()
85
upload_last_8_items.py
Normal file
@@ -0,0 +1,85 @@
# -*- coding: utf-8 -*-
"""
上传最后8条数据到简道云
"""
import csv
import logging
import os
import sys

# 添加当前目录到模块搜索路径
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# 导入处理管道
from processors import ProcessingPipeline

# 配置日志
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# 最新的CSV文件路径
CSV_FILE = "data/浙江省公共资源交易中心_20260213_161312.csv"


def read_csv_data(file_path):
    """读取CSV文件数据"""
    with open(file_path, 'r', encoding='utf-8') as f:
        return list(csv.DictReader(f))


def main():
    """主函数"""
    logger.info("开始处理并上传最后8条数据")

    # 1. 读取CSV数据
    if not os.path.exists(CSV_FILE):
        logger.error(f"CSV文件不存在: {CSV_FILE}")
        return

    data = read_csv_data(CSV_FILE)
    logger.info(f"读取完成,共 {len(data)} 条数据")

    if len(data) == 0:
        logger.error("无数据可处理")
        return

    # 2. 取最后8条数据(不足8条时全部选取)
    logger.info("\n2. 选择数据:")
    selected_data = data[-8:]

    logger.info(f"选择了最后 {len(selected_data)} 条数据")

    # 3. 处理数据
    logger.info("\n3. 处理数据:")
    pipeline = ProcessingPipeline()
    processed = pipeline.process_results(
        selected_data,
        site="zhejiang",
        notice_type="招标文件公示",
        upload=True  # 上传到简道云
    )

    # 4. 展示结果
    logger.info("\n4. 处理结果:")
    logger.info(f"成功处理 {len(processed)} 条数据")

    # 展示前3条的关键信息
    logger.info("\n前3条数据关键信息:")
    for i, record in enumerate(processed[:3], 1):
        logger.info(f"\n测试 {i}")
        logger.info(f"项目名称: {record.get('项目名称', '文档未提及')}")
        logger.info(f"项目批准文号: {record.get('项目批准文号', '文档未提及')}")
        logger.info(f"批准文号: {record.get('批准文号', '文档未提及')}")


if __name__ == "__main__":
    main()
0
utils/__init__.py
Normal file
195
utils/attachment.py
Normal file
@@ -0,0 +1,195 @@
# -*- coding: utf-8 -*-
"""
附件下载和解析模块
支持PDF和Word文档
"""
import os
import re
from typing import Dict, List, Optional

import pdfplumber
import requests
from docx import Document


class AttachmentHandler:
    """附件处理器"""

    def __init__(self, download_dir: str = "attachments"):
        self.download_dir = download_dir
        os.makedirs(download_dir, exist_ok=True)

    def download(self, url: str, filename: Optional[str] = None) -> Optional[str]:
        """
        下载附件

        Args:
            url: 附件URL
            filename: 保存的文件名(可选)

        Returns:
            保存的文件路径,失败返回None
        """
        try:
            # 仅处理绝对URL
            if not url.startswith('http'):
                return None

            # 生成文件名
            if not filename:
                filename = url.split('/')[-1]
            # 清理文件名中的非法字符
            filename = re.sub(r'[<>:"/\\|?*]', '_', filename)

            filepath = os.path.join(self.download_dir, filename)

            # 下载文件
            headers = {
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0'
            }
            response = requests.get(url, headers=headers, timeout=60, stream=True)
            response.raise_for_status()

            with open(filepath, 'wb') as f:
                for chunk in response.iter_content(chunk_size=8192):
                    f.write(chunk)

            print(f"  下载成功: {filepath}")
            return filepath

        except Exception as e:
            print(f"  下载失败: {e}")
            return None

    def extract_pdf_text(self, filepath: str) -> str:
        """
        提取PDF文本内容

        Args:
            filepath: PDF文件路径

        Returns:
            提取的文本内容
        """
        text = ""
        try:
            with pdfplumber.open(filepath) as pdf:
                for page in pdf.pages:
                    page_text = page.extract_text()
                    if page_text:
                        text += page_text + "\n\n"
        except Exception as e:
            print(f"  PDF解析失败: {e}")
        return text.strip()

    def extract_docx_text(self, filepath: str) -> str:
        """
        提取Word文档文本内容

        Args:
            filepath: Word文件路径

        Returns:
            提取的文本内容
        """
        text = ""
        try:
            doc = Document(filepath)
            for para in doc.paragraphs:
                text += para.text + "\n"

            # 提取表格内容
            for table in doc.tables:
                for row in table.rows:
                    row_text = " | ".join([cell.text.strip() for cell in row.cells])
                    text += row_text + "\n"
        except Exception as e:
            print(f"  Word解析失败: {e}")
        return text.strip()

    def extract_text(self, filepath: str) -> str:
        """
        根据文件类型提取文本

        Args:
            filepath: 文件路径

        Returns:
            提取的文本内容
        """
        if not filepath or not os.path.exists(filepath):
            return ""

        ext = os.path.splitext(filepath)[1].lower()

        if ext == '.pdf':
            return self.extract_pdf_text(filepath)
        elif ext in ['.doc', '.docx']:
            return self.extract_docx_text(filepath)
        else:
            return ""

    def download_and_extract(self, url: str, filename: Optional[str] = None) -> Dict:
        """
        下载并提取附件内容

        Args:
            url: 附件URL
            filename: 保存的文件名

        Returns:
            包含文件路径和文本内容的字典
        """
        result = {
            "url": url,
            "filepath": None,
            "text": "",
            "success": False
        }

        filepath = self.download(url, filename)
        if filepath:
            result["filepath"] = filepath
            result["text"] = self.extract_text(filepath)
            result["success"] = True

        return result


def find_attachments(page) -> List[Dict]:
    """
    从页面中查找附件链接

    Args:
        page: DrissionPage页面对象

    Returns:
        附件信息列表 [{"name": "文件名", "url": "下载链接"}, ...]
    """
    attachments = []

    # 查找PDF链接
    pdf_links = page.eles('css:a[href*=".pdf"]')
    for link in pdf_links:
        href = link.attr('href') or ''
        name = link.text.strip() or href.split('/')[-1]
        if href:
            # 处理相对路径
            if href.startswith('/'):
                base_url = '/'.join(page.url.split('/')[:3])
                href = base_url + href
            attachments.append({"name": name, "url": href, "type": "pdf"})

    # 查找Word链接
    doc_selectors = ['css:a[href*=".doc"]', 'css:a[href*=".docx"]']
    for sel in doc_selectors:
        doc_links = page.eles(sel)
        for link in doc_links:
            href = link.attr('href') or ''
            name = link.text.strip() or href.split('/')[-1]
            if href:
                if href.startswith('/'):
                    base_url = '/'.join(page.url.split('/')[:3])
                    href = base_url + href
                attachments.append({"name": name, "url": href, "type": "docx"})

    return attachments
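作为补充,下面单独演示 `download()` 中使用的文件名清理正则的效果(示意代码,函数名为演示自拟):

```python
import re

def sanitize_filename(name: str) -> str:
    # 与 AttachmentHandler.download 中相同的清理规则:
    # 把Windows文件名中的非法字符替换为下划线
    return re.sub(r'[<>:"/\\|?*]', '_', name)

print(sanitize_filename('file:a/b?.pdf'))  # file_a_b_.pdf
print(sanitize_filename('normal.pdf'))    # normal.pdf
```

这样即使URL末段带有查询参数或路径分隔符,拼出的本地路径也不会逃出 download_dir。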
111
随机提取分析报告.md
Normal file
@@ -0,0 +1,111 @@
# 随机提取分析报告

生成时间: 2026-02-13 15:32:50

共提取了 5 条数据

## 提取结果 1

### 项目名称

### 项目批准文号
温发改【2024】52号、温发改【2025】98号

### 其他关键信息
| 字段 | 值 |
|------|------|
| 类型 | 监理 |
| 地区 | 台州市温岭市 |
| 投标截止日 | 文档未提及 |
| 最高投标限价 | 4089187元 |
| 资质要求 | 水运工程乙级及以上监理资质 |
| 业绩要求 | 自2021年1月1日(以实际交工日期为准)以来完成过一个水运工程堆场面积10万平方米及以上的新(或改)建堆场(修复工程除外)工程施工监理项目。 |
| 评标办法 | 综合评估法 |
| 有无答辩 | 无 |
| 招标人 | 浙江白岩山港务有限公司 |

## 提取结果 2

### 项目名称

### 项目批准文号
东发改审批受理〔2024〕6号文件、东发改审批〔2025〕75号文件

### 其他关键信息
| 字段 | 值 |
|------|------|
| 类型 | 总承包 |
| 地区 | 金华市东阳市 |
| 投标截止日 | 文档未提及 |
| 最高投标限价 | 6427.24386万元 |
| 资质要求 | 建筑工程施工总承包叁级及以上 |
| 业绩要求 | 文档未提及 |
| 评标办法 | 评定分离 |
| 有无答辩 | 无 |
| 招标人 | 中共东阳市委党史研究室 |

## 提取结果 3

### 项目名称

### 项目批准文号
临发改基综〔2025〕199号

### 其他关键信息
| 字段 | 值 |
|------|------|
| 类型 | 市政 |
| 地区 | 台州市临海市 |
| 投标截止日 | 文档未提及 |
| 最高投标限价 | 8784343元 |
| 资质要求 | 市政公用工程施工总承包三级及以上 |
| 业绩要求 | 文档未提及 |
| 评标办法 | 评定分离 |
| 有无答辩 | 无 |
| 招标人 | 临海市杜桥镇人民政府 |

## 提取结果 4

### 项目名称

### 项目批准文号
浙发改项字〔2025〕14号

### 其他关键信息
| 字段 | 值 |
|------|------|
| 类型 | 设计 |
| 地区 | 衢州市江山市 |
| 投标截止日 | 2026年 月 日9时00分 |
| 最高投标限价 | 181万元 |
| 资质要求 | 工程勘察专业类(岩土工程(勘察))甲级或工程勘察综合类甲级资质;公路行业(公路)专业设计甲级或公路行业设计甲级或工程设计综合甲级资质 |
| 业绩要求 | 自2021年1月1日以来(以施工图设计批复或准予行政许可决定书时间为准),完成过1条一级及以上公路的勘察;自2021年1月1日以来(以施工图设计批复或准予行政许可决定书时间为准),完成过1条一级及以上公路的设计 |
| 评标办法 | 综合评估法 |
| 有无答辩 | 无 |
| 招标人 | 江山市交通运输局 |

## 提取结果 5

### 项目名称

### 项目批准文号
舟发改审批[2025]67号

### 其他关键信息
| 字段 | 值 |
|------|------|
| 类型 | 总承包 |
| 地区 | 舟山市临城新区 |
| 投标截止日 | 2026年 月 日9时00分 |
| 最高投标限价 | 文档未提及 |
| 资质要求 | 建筑工程施工总承包贰级及以上 |
| 业绩要求 | 文档未提及 |
| 评标办法 | 综合评估法 |
| 有无答辩 | 有 |
| 招标人 | 浙江国际海运职业技术学院 |