Ansible

A deployment tool that drives remote hosts by running commands over SSH.

Table of contents

    • 1. Running tasks
    • 2. Hadoop + Flink installation example
      • 2.1. Directory layout
      • 2.2. inventories/hosts
      • 2.3. Entry playbook: site.yml
      • 2.4. Role 1: hadoop
      • 2.5. Role 2: flink
      • 2.6. One-command run
      • 2.7. Verification

1. Running tasks

```bash
ansible-playbook -i host.ini task.yml -t mytag
```
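
Here `-i host.ini` selects the inventory and `-t mytag` restricts the run to plays and tasks tagged `mytag`. A minimal `task.yml` that this command could drive might look like the following sketch (the file names and tag above are placeholders):

```yaml
---
- name: Tag demo
  hosts: all
  tasks:
    - name: Runs when -t mytag is passed (or when no -t is given at all)
      ansible.builtin.debug:
        msg: "selected by tag"
      tags: mytag
```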

2. Hadoop + Flink installation example

2.1. Directory layout

```
bigdata-ansible/
├── inventories
│   └── hosts
├── site.yml
└── roles/
    ├── hadoop/
    │   ├── defaults/main.yml
    │   ├── tasks/main.yml
    │   ├── handlers/main.yml
    │   └── templates/hadoop-env.sh.j2
    └── flink/
        ├── defaults/main.yml
        ├── tasks/main.yml
        ├── handlers/main.yml
        └── templates/flink-conf.yaml.j2
```

2.2. inventories/hosts

```ini
[bigdata]
192.168.56.10 ansible_user=ubuntu ansible_become=yes
```
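
The same group can also be written in Ansible's YAML inventory format, which scales more comfortably once more nodes or group-level variables appear. An equivalent sketch (not part of the original tree):

```yaml
bigdata:
  hosts:
    192.168.56.10:
      ansible_user: ubuntu
      ansible_become: true
```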

2.3. Entry playbook: site.yml

```yaml
---
- name: Deploy Hadoop
  hosts: bigdata
  become: yes  # escalate to root; Ansible runs the remote commands via sudo
  roles:
    - hadoop
  tags: deploy_hadoop

- name: Deploy Flink
  hosts: bigdata
  become: yes  # escalate to root; Ansible runs the remote commands via sudo
  roles:
    - flink
  tags: deploy_flink
```
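
Because each play carries its own tag, `ansible-playbook -i inventories/hosts site.yml -t deploy_flink` reruns only the Flink play, which is handy when iterating on a single component.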

2.4. Role 1: hadoop

  1. roles/hadoop/defaults/main.yml

    Holds the role's default variables, which have the lowest precedence of all Ansible variable sources (see the override sketch after this list).

    ```yaml
    hadoop_version: 3.3.6
    hadoop_home:    /opt/hadoop-{{ hadoop_version }}
    java_home:      /usr/lib/jvm/java-8-openjdk-amd64
    hadoop_user:    hadoop
    ```
  2. roles/hadoop/tasks/main.yml

    The role's task list:

    ```yaml
    ---
    - name: Install Java 8
      apt:
        name: openjdk-8-jdk
        state: present
        update_cache: yes

    - name: Create the hadoop user
      user:
        name: "{{ hadoop_user }}"
        system: yes
        shell: /bin/bash
        home: "/home/{{ hadoop_user }}"

    - name: Download the Hadoop binary tarball
      get_url:
        url: "https://downloads.apache.org/hadoop/common/hadoop-{{ hadoop_version }}/hadoop-{{ hadoop_version }}.tar.gz"
        dest: /tmp/hadoop-{{ hadoop_version }}.tar.gz
        checksum: sha512:https://downloads.apache.org/hadoop/common/hadoop-{{ hadoop_version }}/hadoop-{{ hadoop_version }}.tar.gz.sha512

    - name: Unpack into /opt
      unarchive:
        src: /tmp/hadoop-{{ hadoop_version }}.tar.gz
        dest: /opt
        remote_src: yes
        owner: "{{ hadoop_user }}"  # the hadoop user must own the tree so the daemons can write $HADOOP_HOME/logs
        group: "{{ hadoop_user }}"

    - name: Create the /opt/hadoop symlink
      file:
        src: "{{ hadoop_home }}"
        dest: /opt/hadoop
        state: link

    - name: Render hadoop-env.sh
      template:
        src: hadoop-env.sh.j2
        dest: "{{ hadoop_home }}/etc/hadoop/hadoop-env.sh"
      notify: restart hadoop

    - name: Write core-site.xml (pseudo-distributed)
      copy:
        content: |
          <configuration>
            <property>
              <name>fs.defaultFS</name>
              <value>hdfs://localhost:9000</value>
            </property>
          </configuration>
        dest: "{{ hadoop_home }}/etc/hadoop/core-site.xml"
      notify: restart hadoop

    - name: Create local namenode/datanode directories
      file:
        path: "/data/hdfs/{{ item }}"
        state: directory
        owner: "{{ hadoop_user }}"
        mode: '0755'
      loop:
        - namenode
        - datanode

    - name: Format the NameNode
      become_user: "{{ hadoop_user }}"
      shell: "{{ hadoop_home }}/bin/hdfs namenode -format -nonInteractive"
      args:
        creates: /data/hdfs/namenode/current/VERSION

    - name: Start NameNode & DataNode
      become_user: "{{ hadoop_user }}"
      shell: "{{ hadoop_home }}/sbin/hadoop-daemon.sh --config {{ hadoop_home }}/etc/hadoop --script hdfs start {{ item }}"
      args:
        creates: "/tmp/hadoop-{{ hadoop_user }}-{{ item }}.pid"  # default HADOOP_PID_DIR is /tmp; skip when the daemon is already up
      loop:
        - namenode
        - datanode

    - name: Wait for NameNode port 9870
      wait_for:
        port: 9870
        timeout: 60
    ```
  3. roles/hadoop/templates/hadoop-env.sh.j2

    The template file:

    ```bash
    export JAVA_HOME={{ java_home }}
    export HADOOP_HOME={{ hadoop_home }}
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    ```
  4. roles/hadoop/handlers/main.yml

    Defines actions that tasks can trigger via notify, typically restarts. A notify does not fire immediately: handlers run, in order, only after every task in the current play has finished.

    ```yaml
    - name: restart hadoop
      become_user: "{{ hadoop_user }}"
      shell: |
        {{ hadoop_home }}/sbin/hadoop-daemon.sh stop namenode || true
        {{ hadoop_home }}/sbin/hadoop-daemon.sh stop datanode || true
        {{ hadoop_home }}/sbin/hadoop-daemon.sh start namenode
        {{ hadoop_home }}/sbin/hadoop-daemon.sh start datanode
    ```
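
Since role defaults sit at the bottom of Ansible's variable precedence, a file placed next to the inventory can override them without touching the role itself. A hypothetical example (this file is not part of the tree above):

```yaml
# inventories/group_vars/bigdata.yml -- overrides roles/hadoop/defaults/main.yml (hypothetical file)
hadoop_version: 3.3.5
```

An `-e hadoop_version=3.3.5` on the ansible-playbook command line would in turn take precedence over both.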

2.5. Role 2: flink

  1. roles/flink/defaults/main.yml

    ```yaml
    flink_version: 1.18.0
    scala_version: 2.12
    flink_name: "flink-{{ flink_version }}-bin-scala_{{ scala_version }}"
    flink_tar: "{{ flink_name }}.tgz"
    flink_home: /opt/flink-{{ flink_version }}  # the tarball unpacks to flink-<version>, not to the tarball basename
    java_home: /usr/lib/jvm/java-8-openjdk-amd64
    ```
  2. roles/flink/tasks/main.yml

    ```yaml
    ---
    - name: Download the Flink tarball
      get_url:
        url: "https://downloads.apache.org/flink/flink-{{ flink_version }}/{{ flink_tar }}"
        dest: /tmp/{{ flink_tar }}
        checksum: sha512:https://downloads.apache.org/flink/flink-{{ flink_version }}/{{ flink_tar }}.sha512

    - name: Unpack into /opt
      unarchive:
        src: /tmp/{{ flink_tar }}
        dest: /opt
        remote_src: yes

    - name: Create the /opt/flink symlink
      file:
        src: "{{ flink_home }}"
        dest: /opt/flink
        state: link

    - name: Render flink-conf.yaml
      template:
        src: flink-conf.yaml.j2
        dest: "{{ flink_home }}/conf/flink-conf.yaml"
      notify: restart flink

    - name: Create the systemd unit file
      copy:
        content: |
          [Unit]
          Description=Apache Flink
          After=network.target

          [Service]
          Type=forking
          User=root
          ExecStart={{ flink_home }}/bin/start-cluster.sh
          ExecStop={{ flink_home }}/bin/stop-cluster.sh
          Restart=on-failure

          [Install]
          WantedBy=multi-user.target
        dest: /etc/systemd/system/flink.service
      notify: restart flink

    - name: Start the Flink cluster
      systemd:
        name: flink
        daemon_reload: yes
        state: started
        enabled: yes

    - name: Wait for JobManager port 8081
      wait_for:
        port: 8081
        timeout: 60
    ```
  3. roles/flink/templates/flink-conf.yaml.j2

    ```yaml
    jobmanager.rpc.address: localhost
    jobmanager.rpc.port: 6123
    taskmanager.numberOfTaskSlots: 4
    parallelism.default: 4
    ```
  4. roles/flink/handlers/main.yml

    ```yaml
    - name: restart flink
      systemd:
        name: flink
        state: restarted
    ```
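
Note that flink-conf.yaml.j2 above renders only fixed values; it becomes a real template once the numbers come from role variables. One possible refinement (the variable names here are hypothetical, not part of the original role):

```yaml
# roles/flink/defaults/main.yml -- hypothetical additions
flink_task_slots: 4
flink_default_parallelism: 4
```

```yaml
# roles/flink/templates/flink-conf.yaml.j2 -- parameterized variant
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
taskmanager.numberOfTaskSlots: {{ flink_task_slots }}
parallelism.default: {{ flink_default_parallelism }}
```

Changing a slot count then only means editing defaults (or overriding them per group), and the template's notify takes care of restarting Flink.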

2.6. One-command run

```bash
# 1. Install Ansible (Ubuntu example)
sudo apt update && sudo apt install -y ansible

# 2. Lay the files out as above and enter the project root
cd bigdata-ansible

# 3. Run it
ansible-playbook -i inventories/hosts site.yml
```
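
Before a real run, `ansible-playbook -i inventories/hosts site.yml --syntax-check` validates the playbook without connecting anywhere, and adding `--check --diff` previews most changes, although the shell-based tasks are skipped in check mode.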

2.7. Verification
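
A quick smoke test is to probe both web endpoints, for example with a small playbook. A minimal sketch (verify.yml is a hypothetical file, not part of the tree above; the ports match the wait_for checks in the roles):

```yaml
---
# verify.yml -- minimal smoke test (hypothetical file)
- name: Smoke-test Hadoop and Flink
  hosts: bigdata
  gather_facts: no
  tasks:
    - name: NameNode web UI answers on 9870
      ansible.builtin.uri:
        url: http://localhost:9870
        status_code: 200

    - name: Flink REST API returns the cluster overview
      ansible.builtin.uri:
        url: http://localhost:8081/overview
        return_content: yes
      register: flink_overview

    - name: Show the overview (TaskManager and slot counts)
      ansible.builtin.debug:
        var: flink_overview.json
```

Run it with `ansible-playbook -i inventories/hosts verify.yml`. Alternatively, `jps` on the node should list NameNode, DataNode, StandaloneSessionClusterEntrypoint (the JobManager), and TaskManagerRunner.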
