Hadoop: The YARN REST API

Today I worked with the YARN REST API for the first time. I set up a cluster locally and was able to retrieve detailed information about every application.

(1) Command to get all information for all applications:

The response is returned as JSON by default; XML can be requested instead by setting the Accept header, as shown in the example below.

Note: the address used here must point to the node running the cluster's ResourceManager (port 8088 is the ResourceManager web/REST port by default).
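For example, the same application listing can be requested in XML via standard Accept-header content negotiation (node142 is the ResourceManager node of this cluster):

bash
# Request the application list as XML instead of the default JSON
curl --compressed -H "Accept: application/xml" -X GET "http://node142:8088/ws/v1/cluster/apps"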

bash
[root@node141 hadoop]# curl -X GET "http://node142:8088/ws/v1/cluster/apps"

Since the response contains a lot of data, only a screenshot is shown here:

[Screenshot of the full application listing omitted]
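Since the full listing is long, one convenient way to skim it (a sketch that is not part of the original steps; it assumes jq is installed on the node, as it is used later in this post) is to print only a few fields per application:

bash
# Print only the id, name and state of each application in the cluster
curl -s "http://node142:8088/ws/v1/cluster/apps" | jq '.apps.app[] | {id, name, state}'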

(2) Command to get all information for a specific application:

bash
[root@node141 hadoop]# curl --compressed -H "Accept: application/json" -X GET "http://node142:8088/ws/v1/cluster/apps/application_1696942734063_0004"

or
curl -X GET http://node142:8088/ws/v1/cluster/apps/application_1696942734063_0004

Response:

json
{
    "app": {
        "id": "application_1696942734063_0004",
        "user": "root",
        "name": "word count",
        "queue": "default",
        "state": "FINISHED",
        "finalStatus": "SUCCEEDED",
        "progress": 100.0,
        "trackingUI": "History",
        "trackingUrl": "http://node142:8088/proxy/application_1696942734063_0004/",
        "diagnostics": "",
        "clusterId": 1696942734063,
        "applicationType": "MAPREDUCE",
        "applicationTags": "",
        "priority": 0,
        "startedTime": 1696943373204,
        "launchTime": 1696943373820,
        "finishedTime": 1696943387557,
        "elapsedTime": 14353,
        "amContainerLogs": "http://node141:8042/node/containerlogs/container_1696942734063_0004_01_000001/root",
        "amHostHttpAddress": "node141:8042",
        "amRPCAddress": "node141:34142",
        "masterNodeId": "node141:37268",
        "allocatedMB": -1,
        "allocatedVCores": -1,
        "reservedMB": -1,
        "reservedVCores": -1,
        "runningContainers": -1,
        "memorySeconds": 48979,
        "vcoreSeconds": 26,
        "queueUsagePercentage": 0.0,
        "clusterUsagePercentage": 0.0,
        "resourceSecondsMap": {
            "entry": {
                "key": "memory-mb",
                "value": "48979"
            },
            "entry": {
                "key": "vcores",
                "value": "26"
            }
        },
        "preemptedResourceMB": 0,
        "preemptedResourceVCores": 0,
        "numNonAMContainerPreempted": 0,
        "numAMContainerPreempted": 0,
        "preemptedMemorySeconds": 0,
        "preemptedVcoreSeconds": 0,
        "preemptedResourceSecondsMap": {
            
        },
        "logAggregationStatus": "SUCCEEDED",
        "unmanagedApplication": false,
        "amNodeLabelExpression": "",
        "timeouts": {
            "timeout": [
                {
                    "type": "LIFETIME",
                    "expiryTime": "UNLIMITED",
                    "remainingTimeInSeconds": -1
                }
            ]
        }
    }
}
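If only a few of these fields are needed, the single-application response can also be filtered with jq (a sketch using the same application id as above):

bash
# Pull a handful of fields out of the single-application response
curl -s "http://node142:8088/ws/v1/cluster/apps/application_1696942734063_0004" \
  | jq '.app | {id, state, finalStatus, elapsedTime}'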

(3) Filtering the JSON for applications with state == "FINISHED" ------ failed

bash
curl http://node142:8088/ws/v1/cluster/apps | jq '.apps.app[] | select(.state == "FINISHED")' > /opt/data/running_apps.json
json
[root@node141 ~]# cat /opt/data/running_apps.json 
{
  "id": "application_1696942734063_0001",
  "user": "root",
  "name": "word count",
  "queue": "default",
  "state": "FINISHED",
 ......
}
{
  "id": "application_1696942734063_0002",
  "user": "root",
  "name": "word count",
  "queue": "default",
  "state": "FINISHED",
 ......
}
{
  "id": "application_1696942734063_0003",
  "user": "root",
  "name": "word count",
  "queue": "default",
  "state": "FINISHED",
  ......
}
{
  "id": "application_1696942734063_0004",
  "user": "root",
  "name": "word count",
  "queue": "default",
  "state": "FINISHED",
  ......
}
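Note that the output above is a stream of separate JSON objects rather than a single JSON document, which may be why a later step treats it as invalid. Wrapping the jq filter in [ ] collects the matches into one JSON array instead (a sketch based on the same filter; the output file name finished_apps.json is just an example):

bash
# Collect all FINISHED applications into a single JSON array
curl -s "http://node142:8088/ws/v1/cluster/apps" \
  | jq '[.apps.app[] | select(.state == "FINISHED")]' > /opt/data/finished_apps.json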