When debugging in PyCharm, execution hangs and never reaches the breakpoint

The error output looks roughly like this. Running the script directly works fine, but under the debugger it just hangs:

```
Traceback (most recent call last):
  File "/home/mapengsen/.pycharm_helpers/pydev/_pydevd_bundle/pydevd_comm.py", line 467, in start_client
    s.connect((host, port))
TimeoutError: timed out
Could not connect to 127.0.0.1: 56945

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/mapengsen/anaconda3/envs/MDT2/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 26, in <module>
    from torch._inductor.codecache import code_hash, CompiledFxGraph
  File "/home/mapengsen/anaconda3/envs/MDT2/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1424, in <module>
    AsyncCompile.warm_pool()
  File "/home/mapengsen/anaconda3/envs/MDT2/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1363, in warm_pool
    pool._adjust_process_count()
  File "/home/mapengsen/anaconda3/envs/MDT2/lib/python3.10/concurrent/futures/process.py", line 697, in _adjust_process_count
Could not connect to 127.0.0.1: 56945
```

(The raw output actually contains two of these tracebacks interleaved line by line, because the error occurs in worker processes spawned by `torch._inductor`'s compile pool; the listing above is the deinterleaved version.)

It turned out the problem was importing my own custom `datasets` module. I had put test code at module level inside that file, and the test code contained an error. Since the main program also imports this `datasets` module, the broken test code ran at import time and the debugger hung forever. The fix is to delete the failing code from the dataset file and keep only the function definitions:
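A more robust alternative to deleting the test code is to put it under an `if __name__ == "__main__":` guard, so it never runs when the module is imported by the main program. A minimal sketch (the module name `my_datasets` and the simplified function body are hypothetical, for illustration only):

```python
# my_datasets.py (hypothetical module name)

def collate_fn_paired_skip_invalid(batch):
    """Simplified stand-in: keep only items whose first element is not None."""
    return [item for item in batch if item[0] is not None]

# Module-level test code belongs under this guard, so that
# `import my_datasets` from the main script never executes it.
# An unguarded, failing test block here is exactly what made
# the debugger hang.
if __name__ == "__main__":
    sample = [(1, "a"), (None, "b")]
    print(collate_fn_paired_skip_invalid(sample))  # runs only when executed directly
```

With the guard in place, any bug in the test block can only surface when you run the file directly, not when the main training script imports it.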

```python
import torch

def collate_fn_paired_skip_invalid(batch):
    if len(batch[0]) == 5:  # single-task case (task_id appended)
        valid_batch_items = [item for item in batch
                             if item[0] is not None and item[2] is not None]
        if not valid_batch_items:
            return (torch.empty(0), torch.empty(0, 0), torch.empty(0),
                    torch.empty(0, 0), torch.empty(0, dtype=torch.long))
        return torch.utils.data.dataloader.default_collate(valid_batch_items)
    else:  # multi-task case (7 elements, task_id appended)
        valid_batch_items = [item for item in batch
                             if item[0] is not None and item[2] is not None
                             and item[4] is not None]
        if not valid_batch_items:
            return (torch.empty(0), torch.empty(0, 0), torch.empty(0),
                    torch.empty(0, 0), torch.empty(0), torch.empty(0, 0),
                    torch.empty(0, dtype=torch.long))
        return torch.utils.data.dataloader.default_collate(valid_batch_items)
```
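Once the module exposes only the collate function, it can be plugged into a `DataLoader` via the `collate_fn` argument. A minimal usage sketch for the single-task (5-tuple) branch; the toy dataset and tensor shapes below are made up for illustration:

```python
import torch
from torch.utils.data import Dataset, DataLoader

def collate_fn_paired_skip_invalid(batch):
    # 5-tuple branch only, as defined above
    valid = [item for item in batch if item[0] is not None and item[2] is not None]
    if not valid:
        return (torch.empty(0), torch.empty(0, 0), torch.empty(0),
                torch.empty(0, 0), torch.empty(0, dtype=torch.long))
    return torch.utils.data.dataloader.default_collate(valid)

class ToyPairedDataset(Dataset):
    """Hypothetical 5-tuple dataset; one item is invalid on purpose."""
    def __init__(self):
        good = (torch.ones(3), torch.zeros(2), torch.ones(3), torch.zeros(2), 0)
        bad = (None, torch.zeros(2), None, torch.zeros(2), 1)  # will be skipped
        self.items = [good, bad, good]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]

loader = DataLoader(ToyPairedDataset(), batch_size=3,
                    collate_fn=collate_fn_paired_skip_invalid)
batch = next(iter(loader))
print(batch[0].shape)  # only the 2 valid items are stacked
```

`default_collate` stacks the surviving tuples position by position, so `batch[0]` here has shape `(2, 3)` rather than `(3, 3)`: the invalid item was dropped instead of crashing the loader.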

Delete the module-level code below, so it can no longer raise at import time:

```python
# --- main training loop ---
trained_models_per_task = {}

# assume all_task_names is defined here
all_task_names = [['A_bioavailability_ma'], ['A_hia_hou'],
                  ['A_bioavailability_ma', 'A_hia_hou']]

for current_task_names in all_task_names:
    task_key = '+'.join(current_task_names)  # key for this task combination
    print(f"\n--- Preparing data and model for task combination: {task_key} (Paired Data) ---")
```
