Our project needs to call a third-party website's API to fetch information (much like a Python crawler), so the backend has to build requests the way the frontend does with axios. Below is a simple example that looks up the location of an IP address.
```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ConnectException;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.net.URLConnection;

import com.alibaba.fastjson.JSONObject;
import cn.hutool.core.util.StrUtil;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class IPTest {
    // IP geolocation endpoint
    public static final String IP_URL = "http://whois.pconline.com.cn/ipJson.jsp";
    // Fallback when the lookup fails
    public static final String UNKNOWN = "Unknown address";

    private static String getRealAddressByIP(String ip) {
        try {
            // The endpoint replies with GBK-encoded JSON
            String rspStr = sendGet(IP_URL, "ip=" + ip + "&json=true", "GBK");
            if (StrUtil.isEmpty(rspStr)) {
                return UNKNOWN;
            }
            JSONObject obj = JSONObject.parseObject(rspStr);
            return obj.getString("addr");
        } catch (Exception e) {
            log.error("Failed to resolve address for IP {}", ip, e);
        }
        return UNKNOWN;
    }

    private static String sendGet(String url, String param, String contentType) {
        StringBuilder result = new StringBuilder();
        BufferedReader in = null;
        try {
            String urlNameString = url + "?" + param;
            URL realUrl = new URL(urlNameString);
            URLConnection connection = realUrl.openConnection();
            connection.setRequestProperty("accept", "*/*");
            connection.setRequestProperty("connection", "Keep-Alive");
            connection.setRequestProperty("User-Agent", "Mozilla/4.0 compatible; MSIE 6.0; Windows NT 5.1;DigExt");
            connection.connect();
            // Decode the response body with the charset the endpoint uses
            in = new BufferedReader(new InputStreamReader(connection.getInputStream(), contentType));
            String line;
            while ((line = in.readLine()) != null) {
                result.append(line);
            }
        } catch (ConnectException e) {
            log.error("Connection refused: {}", url, e);
        } catch (SocketTimeoutException e) {
            log.error("Request timed out: {}", url, e);
        } catch (IOException e) {
            log.error("I/O error while reading response from {}", url, e);
        } catch (Exception e) {
            log.error("Unexpected error while requesting {}", url, e);
        } finally {
            try {
                if (in != null) {
                    in.close();
                }
            } catch (Exception ex) {
                log.error("Failed to close reader", ex);
            }
        }
        return result.toString();
    }

    public static void main(String[] args) {
        String ipaddr = getRealAddressByIP(/* your IP */);
        String address = UNKNOWN.equals(ipaddr) ? "Failed to obtain location" : ipaddr;
        log.info(address);
    }
}
```
Some websites have anti-crawler measures, so when setting up the connection, adding `connection.setRequestProperty("User-Agent","Mozilla/4.0 compatible; MSIE 6.0; Windows NT 5.1;DigExt");` to make the request look like it comes from a browser solves most of these problems.
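On Java 11 and later, the same browser-like `User-Agent` header can also be set with the built-in `java.net.http.HttpClient` API instead of `URLConnection`. The sketch below (the class name `HttpGetDemo` and method `buildGet` are illustrative, not part of the original code) builds the request and prints its headers without sending it:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class HttpGetDemo {
    // Build a GET request carrying a browser-like User-Agent header,
    // mirroring the setRequestProperty call in the URLConnection version.
    static HttpRequest buildGet(String url) {
        return HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(5))
                .header("User-Agent", "Mozilla/4.0 compatible; MSIE 6.0; Windows NT 5.1;DigExt")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildGet(
                "http://whois.pconline.com.cn/ipJson.jsp?ip=8.8.8.8&json=true");
        // Inspect the prepared request without actually hitting the network
        System.out.println(request.method() + " " + request.uri());
        System.out.println("User-Agent: "
                + request.headers().firstValue("User-Agent").orElse("(none)"));
    }
}
```

To actually send it, pass the request to `HttpClient.send(...)` with `HttpResponse.BodyHandlers.ofString(Charset.forName("GBK"))`, since this endpoint replies in GBK rather than UTF-8.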