This document contains code samples for requesting an HTTP proxy server programmatically, for developers' reference.

How to use the code samples

  1. The code samples cannot be run as-is: the order ID 9266892014xxxxx, the proxy IP and port 59.38.241.25:23916, the username username, and the password password are all placeholders; replace them with your own values (see the sketch after this list).
  2. The runtime requirements and caveats for each sample are noted at the end of the sample; please read them before use.
  3. If you run into problems while using the code samples, please contact customer service and we will provide technical support.
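Before wiring a proxy into any client, it can help to fetch one from the extraction API and sanity-check its shape. A minimal sketch, assuming the API returns a single proxy as plain "ip:port" text (as the samples below expect):

import requests

# Placeholder order ID, as in all samples below; replace with your own
api_url = "http://dps.kdlapi.com/api/getdps/?orderid=9266892014xxxxx&num=1&pt=1&sep=1"
proxy_ip = requests.get(api_url).text.strip()

# With num=1 the response is expected to be a single "ip:port" line
host, _, port = proxy_ip.partition(":")
assert host and port.isdigit(), "unexpected API response: %r" % proxy_ip
print("proxy to use:", proxy_ip)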

Python 3

requests

requests (recommended)

Usage notes

  1. The requests-based sample supports both HTTP and HTTPS pages; recommended.
  2. requests is not part of the Python standard library; install it first: pip install requests
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Request a proxy server with requests
Works for both HTTP and HTTPS pages
"""
import requests

# Extraction API: fetch 1 proxy IP
api_url = "http://dps.kdlapi.com/api/getdps/?orderid=9266892014xxxxx&num=1&pt=1&sep=1"

# Proxy IP returned by the API
proxy_ip = requests.get(api_url).text

# Username/password authentication (private/dedicated proxy)
username = "username"
password = "password"
proxies = {
    "http": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": proxy_ip},
    "https": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": proxy_ip}
}

# Whitelist mode (the whitelist must be configured in advance)
# proxies = {
#     "http": "http://%(proxy)s/" % {"proxy": proxy_ip},
#     "https": "http://%(proxy)s/" % {"proxy": proxy_ip}
# }

# Target page
target_url = "https://dev.kdlapi.com/testproxy"

# Send the request through the proxy
response = requests.get(target_url, proxies=proxies)

# Print the page content
if response.status_code == 200:
    print(response.text)
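When many pages are fetched through the same proxy, reusing one requests.Session avoids re-specifying proxies on every call and reuses connections. A minimal sketch building on the proxies dict above:

import requests

session = requests.Session()
session.proxies = proxies  # the proxies dict defined above

# Every request on this session now goes through the proxy
r = session.get("https://dev.kdlapi.com/testproxy", timeout=10)
if r.status_code == 200:
    print(r.text)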

aiohttp

aiohttp

Usage notes

  1. The aiohttp-based sample supports both HTTP and HTTPS pages
  2. aiohttp is not part of the Python standard library; install it first: pip install aiohttp
  3. aiohttp requires Python 3.5 or later
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Request a proxy server with aiohttp
Works for both HTTP and HTTPS pages
"""
import aiohttp
import asyncio
import requests

page_url = "https://dev.kdlapi.com/testproxy"  # target page

# Extraction API: fetch 1 proxy IP
api_url = "http://dps.kdlapi.com/api/getdps/?orderid=9266892014xxxxx&num=1&pt=1&sep=1"

# Proxy IP returned by the API
proxy_ip = requests.get(api_url).text

# Username/password authentication (private/dedicated proxy)
username = "username"
password = "password"
proxy_auth = aiohttp.BasicAuth(username, password)


async def fetch(session, url):
    async with session.get(url, proxy="http://" + proxy_ip, proxy_auth=proxy_auth) as response:
        return await response.text()


async def main():
    # aiohttp verifies HTTPS certificates strictly by default; pass ssl=False to relax the check
    # async with aiohttp.ClientSession(connector=aiohttp.TCPConnector(ssl=False)) as session:
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, page_url)
        print(html)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
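The explicit event-loop calls above are the pre-3.7 idiom; on Python 3.7 and later the entry point is usually written with asyncio.run instead:

if __name__ == '__main__':
    asyncio.run(main())  # Python 3.7+ replacement for get_event_loop/run_until_complete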

httpx

httpx

Usage notes

  1. The httpx-based sample supports both HTTP and HTTPS pages
  2. httpx is not part of the Python standard library; install it first: pip install httpx
  3. httpx requires Python 3.6 or later
  4. httpx does not support SOCKS proxies yet
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Request a proxy server with httpx
Works for both HTTP and HTTPS pages
"""
import httpx
import requests

# Extraction API: fetch 1 proxy IP
api_url = "http://dps.kdlapi.com/api/getdps/?orderid=9266892014xxxxx&num=1&pt=1&sep=1"

# Proxy IP returned by the API
proxy_ip = requests.get(api_url).text

# Username/password authentication (private/dedicated proxy)
username = "username"
password = "password"
proxy_url = "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": proxy_ip}

proxies = httpx.Proxy(
    url=proxy_url,
    mode="DEFAULT"
)

with httpx.Client(proxies=proxies) as client:
    r = client.get('http://dev.kdlapi.com/testproxy')
    print(r.text)
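Note that the httpx.Proxy(mode=...) form above matches older httpx releases. In later versions you can usually pass the proxy URL directly, and from httpx 0.26 the parameter is named proxy rather than proxies. A sketch to adapt to your installed version:

import httpx

proxy_url = "http://username:password@59.38.241.25:23916"

# httpx < 0.26
with httpx.Client(proxies=proxy_url) as client:
    print(client.get("https://dev.kdlapi.com/testproxy").text)

# httpx >= 0.26: use proxy= instead
# with httpx.Client(proxy=proxy_url) as client:
#     print(client.get("https://dev.kdlapi.com/testproxy").text)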

urllib

urllib

Usage notes

  • The urllib-based sample supports both HTTP and HTTPS pages
  • Requires Python 3.x
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Request a proxy server with urllib
Works for both HTTP and HTTPS pages
"""
import urllib.request
import ssl

# Disable certificate verification globally to avoid errors on HTTPS pages
ssl._create_default_https_context = ssl._create_unverified_context

# Extraction API: fetch 1 proxy IP
api_url = "http://dps.kdlapi.com/api/getdps/?orderid=9266892014xxxxx&num=1&pt=1&sep=1"

# Proxy IP returned by the API
proxy_ip = urllib.request.urlopen(api_url).read().decode('utf-8')

# Username/password authentication (private/dedicated proxy)
username = "username"
password = "password"
proxies = {
    "http": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": proxy_ip},
    "https": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": proxy_ip}
}

# Whitelist mode (the whitelist must be configured in advance)
# proxies = {
#     "http": "http://%(proxy)s/" % {"proxy": proxy_ip},
#     "https": "http://%(proxy)s/" % {"proxy": proxy_ip}
# }

# Target page
target_url = "https://dev.kdlapi.com/testproxy"

# Send the request through the proxy
proxy_support = urllib.request.ProxyHandler(proxies)
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)
response = urllib.request.urlopen(target_url)

# Print the page content
if response.code == 200:
    print(response.read().decode('utf-8'))
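Patching ssl globally, as above, disables verification for the whole process. If you prefer to scope it to this one opener, you can pass an unverified context to an HTTPSHandler instead. A minimal sketch, assuming verification is intentionally off:

import ssl
import urllib.request

# Scoped alternative to the global monkey-patch above
ctx = ssl._create_unverified_context()
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler(proxies),  # the proxies dict from the sample above
    urllib.request.HTTPSHandler(context=ctx),
)
print(opener.open("https://dev.kdlapi.com/testproxy").read().decode('utf-8'))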

Python 2

requests

requests (recommended)

Usage notes

  1. The requests-based sample supports both HTTP and HTTPS pages; recommended.
  2. requests is not part of the Python standard library; install it first: pip install requests
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Request a proxy server with requests
Works for both HTTP and HTTPS pages
"""
import requests

# Extraction API: fetch 1 proxy IP
api_url = "http://dps.kdlapi.com/api/getdps/?orderid=9266892014xxxxx&num=1&pt=1&sep=1"

# Proxy IP returned by the API
proxy_ip = requests.get(api_url).text

# Username/password authentication (private/dedicated proxy)
username = "username"
password = "password"
proxies = {
    "http": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": proxy_ip},
    "https": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": proxy_ip}
}

# Whitelist mode (the whitelist must be configured in advance)
# proxies = {
#     "http": "http://%(proxy)s/" % {"proxy": proxy_ip},
#     "https": "http://%(proxy)s/" % {"proxy": proxy_ip}
# }

# Target page
target_url = "https://dev.kdlapi.com/testproxy"

# Send the request through the proxy
response = requests.get(target_url, proxies=proxies)

# Print the page content
if response.status_code == 200:
    print response.text

urllib2

urllib2

Usage notes

  • The urllib2-based sample supports both HTTP and HTTPS pages
  • Requires Python 2.6 / 2.7
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Request a proxy server with urllib2
Works for both HTTP and HTTPS pages
"""
import urllib2
import ssl

# Disable certificate verification globally to avoid errors on HTTPS pages
ssl._create_default_https_context = ssl._create_unverified_context

# Extraction API: fetch 1 proxy IP
api_url = "http://dps.kdlapi.com/api/getdps/?orderid=9266892014xxxxx&num=1&pt=1&sep=1"

# Proxy IP returned by the API
proxy_ip = urllib2.urlopen(api_url).read()

# Username/password authentication (private/dedicated proxy)
username = "username"
password = "password"
proxies = {
    "http": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": proxy_ip},
    "https": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": proxy_ip}
}

# Whitelist mode (the whitelist must be configured in advance)
# proxies = {
#     "http": "http://%(proxy)s/" % {"proxy": proxy_ip},
#     "https": "http://%(proxy)s/" % {"proxy": proxy_ip}
# }

# Target page
target_url = "https://dev.kdlapi.com/testproxy"

# Send the request through the proxy
proxy_support = urllib2.ProxyHandler(proxies)
opener = urllib2.build_opener(proxy_support)
urllib2.install_opener(opener)
response = urllib2.urlopen(target_url)

# Print the page content
if response.code == 200:
    print response.read()

Python-Selenium

Chrome

Chrome (IP whitelist, recommended)

Usage notes

  1. Authenticate the proxy via IP whitelist with Selenium + Chrome
  2. Requirements: Python 2/3, Selenium, Chrome, ChromeDriver, Windows/Linux/macOS
  3. (The ChromeDriver version must match your Chrome version)
  4. selenium is not part of the Python standard library; install it first: pip install selenium
  5. Replace the placeholders in the code:
    ${ip:port}: proxy IP:port, e.g. "59.38.241.25:23916"
    ${chromedriver_path}: the path to ChromeDriver on your machine, e.g. "C:\chromedriver.exe"
#!/usr/bin/env python
# encoding: utf-8
from selenium import webdriver
import time

options = webdriver.ChromeOptions()
options.add_argument('--proxy-server=http://${ip:port}')  # proxy IP:port

# ${chromedriver_path}: path to your ChromeDriver
driver = webdriver.Chrome(executable_path="${chromedriver_path}", options=options)
driver.get("https://dev.kdlapi.com/testproxy")

# Print the page content
print(driver.page_source)

# Wait 3 seconds, then close the current window (quits if it is the last one)
time.sleep(3)
driver.close()
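The executable_path argument was removed in Selenium 4; if you run a recent Selenium, the driver is constructed through a Service object instead. A minimal sketch under that assumption:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

options = webdriver.ChromeOptions()
options.add_argument('--proxy-server=http://${ip:port}')  # proxy IP:port placeholder

# Selenium 4 style: the ChromeDriver path goes through Service
driver = webdriver.Chrome(service=Service("${chromedriver_path}"), options=options)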
Chrome (username/password authentication)

Usage notes

  1. Authenticate the proxy via username/password with Selenium + Chrome
  2. Requirements: Python 2/3, Selenium, Chrome, ChromeDriver, Windows/Linux/macOS
  3. (The ChromeDriver version must match your Chrome version)
  4. selenium is not part of the Python standard library; install it first: pip install selenium
  5. Replace the placeholders in the code:
    ${proxy_ip}: proxy IP
    ${proxy_port}: port
    ${username}: username
    ${password}: password
    ${chromedriver_path}: the path to ChromeDriver on your machine, e.g. "C:\chromedriver.exe"
#!/usr/bin/env python
# encoding: utf-8
from selenium import webdriver
import string
import zipfile
import time


def create_proxyauth_extension(proxy_host, proxy_port, proxy_username, proxy_password, scheme='http', plugin_path=None):
    """Build a Chrome extension that handles proxy authentication
    Args:
        proxy_host (str): proxy address or domain
        proxy_port (int): proxy port
        proxy_username (str): username (private/dedicated proxy)
        proxy_password (str): password (private/dedicated proxy)
    Kwargs:
        scheme (str): proxy scheme, defaults to http
        plugin_path (str): absolute path of the extension
    Returns: str -> plugin_path
    """
    if plugin_path is None:
        plugin_path = 'vimm_chrome_proxyauth_plugin.zip'
    manifest_json = """
    {
        "version": "1.0.0",
        "manifest_version": 2,
        "name": "Chrome Proxy",
        "permissions": [
            "proxy",
            "tabs",
            "unlimitedStorage",
            "storage",
            "<all_urls>",
            "webRequest",
            "webRequestBlocking"
        ],
        "background": {
            "scripts": ["background.js"]
        },
        "minimum_chrome_version":"22.0.0"
    }
    """
    background_js = string.Template(
        """
        var config = {
                mode: "fixed_servers",
                rules: {
                singleProxy: {
                    scheme: "${scheme}",
                    host: "${host}",
                    port: parseInt(${port})
                },
                bypassList: ["foobar.com"]
                }
            };
        chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});
        function callbackFn(details) {
            return {
                authCredentials: {
                    username: "${username}",
                    password: "${password}"
                }
            };
        }
        chrome.webRequest.onAuthRequired.addListener(
                    callbackFn,
                    {urls: ["<all_urls>"]},
                    ['blocking']
        );
        """
    ).substitute(
        host=proxy_host,
        port=proxy_port,
        username=proxy_username,
        password=proxy_password,
        scheme=scheme,
    )
    with zipfile.ZipFile(plugin_path, 'w') as zp:
        zp.writestr("manifest.json", manifest_json)
        zp.writestr("background.js", background_js)
    return plugin_path


proxyauth_plugin_path = create_proxyauth_extension(
    proxy_host="${proxy_ip}",  # proxy IP
    proxy_port="${proxy_port}",  # port
    # Username/password (private/dedicated proxy)
    proxy_username="${username}",
    proxy_password="${password}"
)
options = webdriver.ChromeOptions()
options.add_extension(proxyauth_plugin_path)

# ${chromedriver_path}: path to your ChromeDriver
driver = webdriver.Chrome(executable_path="${chromedriver_path}", options=options)
driver.get("https://dev.kdlapi.com/testproxy")

# Print the page content
print(driver.page_source)

# Wait 3 seconds, then close the current window (quits if it is the last one)
time.sleep(3)
driver.close()
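One caveat worth noting: Chrome's classic headless mode does not load extensions, so this username/password plugin approach needs a visible browser window; if you need headless operation, prefer the IP-whitelist method above.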

Firefox

Firefox (IP whitelist, recommended)

Usage notes

  1. Authenticate the proxy via IP whitelist with Selenium + Firefox
  2. Requirements: Python 2/3, Selenium, Firefox, geckodriver, Windows/Linux/macOS
  3. (The geckodriver version must match your Firefox version)
  4. selenium is not part of the Python standard library; install it first: pip install selenium
  5. Replace the placeholders in the code:
    ${ip:port}: proxy IP:port, e.g. "59.38.241.25:23916"
    ${geckodriver_path}: the path to geckodriver on your machine, e.g. "C:\geckodriver.exe"
#!/usr/bin/env python
# encoding: utf-8
from selenium import webdriver
import time

fp = webdriver.FirefoxProfile()
proxy = '${ip:port}'
ip, port = proxy.split(":")
port = int(port)

# Proxy settings
fp.set_preference('network.proxy.type', 1)
fp.set_preference('network.proxy.http', ip)
fp.set_preference('network.proxy.http_port', port)
fp.set_preference('network.proxy.ssl', ip)
fp.set_preference('network.proxy.ssl_port', port)

driver = webdriver.Firefox(executable_path="${geckodriver_path}", firefox_profile=fp)
driver.get('https://dev.kdlapi.com/testproxy')

# Print the page content
print(driver.page_source)

# Wait 3 seconds, then close the current window (quits if it is the last one)
time.sleep(3)
driver.close()
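As with Chrome, Selenium 4 removed executable_path (and deprecated FirefoxProfile); preferences can be set on the options object and the driver path passed via Service. A minimal sketch under that assumption:

from selenium import webdriver
from selenium.webdriver.firefox.service import Service

options = webdriver.FirefoxOptions()
ip, port = '${ip:port}'.split(":")  # proxy IP:port placeholder
options.set_preference('network.proxy.type', 1)
options.set_preference('network.proxy.http', ip)
options.set_preference('network.proxy.http_port', int(port))
options.set_preference('network.proxy.ssl', ip)
options.set_preference('network.proxy.ssl_port', int(port))

# Selenium 4 style: the geckodriver path goes through Service
driver = webdriver.Firefox(service=Service("${geckodriver_path}"), options=options)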

Python-Scrapy

Usage notes

  1. Works for both HTTP and HTTPS pages
  2. scrapy is not part of the Python standard library; install it first: pip install scrapy
  3. Run scrapy crawl kdl from the top-level tutorial directory to see the result
Scrapy project layout

Run scrapy startproject tutorial to create a new Scrapy project; this creates a tutorial directory with the following contents:

tutorial/
    scrapy.cfg # project configuration file
    tutorial/ # the project's Python module; your code goes here
        __init__.py
        items.py # item definitions
        pipelines.py # pipeline definitions
        settings.py # project settings
        spiders/ # directory holding the spiders
            __init__.py
            ...
kdl_spider.py

Write the spider: create a kdl_spider.py file under the tutorial/spiders/ directory.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import scrapy


class KdlSpider(scrapy.spiders.Spider):
    name = "kdl"

    def start_requests(self):
        url = "https://dev.kdlapi.com/testproxy"
        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        print(response.text)
middlewares.py
  1. Add a ProxyDownloaderMiddleware (the proxy middleware) to middlewares.py
  2. Replace the placeholders in the code:
    ${username}: username
    ${password}: password

    # -*- coding: utf-8 -*-
    from scrapy import signals
    from w3lib.http import basic_auth_header

    class ProxyDownloaderMiddleware:

        def process_request(self, request, spider):
            proxy = "59.38.241.25:23916"
            request.meta['proxy'] = "http://%(proxy)s" % {'proxy': proxy}
            # Username/password authentication (private/dedicated proxy)
            request.headers['Proxy-Authorization'] = basic_auth_header('${username}', '${password}')  # comment out this line for whitelist authentication
            return None
settings.py

Activate the ProxyDownloaderMiddleware in settings.py

# -*- coding: utf-8 -*-
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'tutorial.middlewares.ProxyDownloaderMiddleware': 100,
}
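If only some requests need the proxy, an alternative to a project-wide middleware is Scrapy's built-in HttpProxyMiddleware, which honors request.meta['proxy'] per request (credentials may be embedded in the URL). A minimal sketch inside the spider:

    # Inside KdlSpider, instead of the plain start_requests above;
    # ${username} and ${password} are the same placeholders as before
    def start_requests(self):
        url = "https://dev.kdlapi.com/testproxy"
        yield scrapy.Request(
            url,
            callback=self.parse,
            meta={'proxy': 'http://${username}:${password}@59.38.241.25:23916'},
        )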

Java

HttpClient

HttpClient 4.5.6

Usage notes

  1. This sample supports both HTTP and HTTPS pages
  2. With username/password authentication, HttpClient sends every request twice to complete the authentication, which increases request latency; whitelist access is recommended
  3. If several username/password pairs are used for authentication, add AuthCacheValue.setAuthCache(new AuthCacheImpl()); to the code
  4. Dependency package download:

import java.net.URL;

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

/**
 * Request a proxy server with HttpClient; works for both HTTP and HTTPS pages
 */
public class TestProxyHttpClient {

    private static String pageUrl = "https://dev.kdlapi.com/testproxy"; // target page
    private static String proxyIp = "59.38.241.25"; // proxy server IP
    private static int proxyPort = 23916; // port
    // Username/password authentication (private/dedicated proxy)
    private static String username = "username";
    private static String password = "password";

    public static void main(String[] args) throws Exception {
        // Since JDK 8u111, proxy user/password authentication for HTTPS targets must be re-enabled
        System.setProperty("jdk.http.auth.tunneling.disabledSchemes", "");

        CredentialsProvider credsProvider = new BasicCredentialsProvider();
        credsProvider.setCredentials(new AuthScope(proxyIp, proxyPort),
                new UsernamePasswordCredentials(username, password));
        CloseableHttpClient httpclient = HttpClients.custom().setDefaultCredentialsProvider(credsProvider).build();
        try {
            URL url = new URL(pageUrl);
            HttpHost target = new HttpHost(url.getHost(), url.getDefaultPort(), url.getProtocol());
            HttpHost proxy = new HttpHost(proxyIp, proxyPort);
            RequestConfig config = RequestConfig.custom().setProxy(proxy).build();
            HttpGet httpget = new HttpGet(url.getPath());
            httpget.setConfig(config);
            httpget.addHeader("Accept-Encoding", "gzip"); // gzip compression speeds up the transfer
            CloseableHttpResponse response = httpclient.execute(target, httpget);
            try {
                System.out.println(response.getStatusLine());
                System.out.println(EntityUtils.toString(response.getEntity()));
            } finally {
                response.close();
            }
        } finally {
            httpclient.close();
        }
    }
}

jsoup

Making requests with jsoup

Usage notes

  1. This sample supports both HTTP and HTTPS pages
  2. With username/password authentication, every request is sent twice to complete the authentication, which increases request latency; whitelist access is recommended
  3. If several username/password pairs are used for authentication, add AuthCacheValue.setAuthCache(new AuthCacheImpl()); to the code
  4. Dependency package download:
import java.io.IOException;
import java.net.Authenticator;
import java.net.InetSocketAddress;
import java.net.PasswordAuthentication;
import java.net.Proxy;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class TestProxyJsoup {
    // Username/password authentication (private/dedicated proxy)
    final static String proxyUser = "username";
    final static String proxyPass = "password";

    // Proxy IP and port
    final static String proxyHost = "59.38.241.25";
    final static Integer proxyPort = 23916;

    public static String getUrlProxyContent(String url) {
        Authenticator.setDefault(new Authenticator() {
            public PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(proxyUser, proxyPass.toCharArray());
            }
        });
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
        try {
            // Handle exceptions and other parameters as needed
            Document doc = Jsoup.connect(url).followRedirects(false).timeout(3000).proxy(proxy).get();
            if (doc != null) {
                System.out.println(doc.body().html());
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // Target site
        String targetUrl = "https://dev.kdlapi.com/testproxy";
        // Since JDK 8u111, proxy user/password authentication for HTTPS targets must be re-enabled
        System.setProperty("jdk.http.auth.tunneling.disabledSchemes", "");
        getUrlProxyContent(targetUrl);
    }
}

Golang

Standard library

Standard library

Usage notes

  • Works for both HTTP and HTTPS pages
// Request a proxy server
// Works for both HTTP and HTTPS pages
package main

import (
    "compress/gzip"
    "fmt"
    "io"
    "io/ioutil"
    "net/http"
    "net/url"
    "os"
)

func main() {
    // Username/password authentication (private/dedicated proxy)
    username := "username"
    password := "password"

    // Proxy server
    proxy_raw := "59.38.241.25:23916"
    proxy_str := fmt.Sprintf("http://%s:%s@%s", username, password, proxy_raw)
    proxy, err := url.Parse(proxy_str)

    // Target page
    page_url := "http://dev.kdlapi.com/testproxy"

    // Request the target page
    client := &http.Client{Transport: &http.Transport{Proxy: http.ProxyURL(proxy)}}
    req, _ := http.NewRequest("GET", page_url, nil)
    req.Header.Add("Accept-Encoding", "gzip") // gzip compression speeds up the transfer
    res, err := client.Do(req)
    if err != nil {
        // The request failed
        fmt.Println(err.Error())
    } else {
        defer res.Body.Close() // make sure the body gets closed
        fmt.Println("status code:", res.StatusCode) // status code

        // When the response is gzip-compressed, decompress before reading
        if res.Header.Get("Content-Encoding") == "gzip" {
            reader, _ := gzip.NewReader(res.Body) // gzip decompression
            defer reader.Close()
            io.Copy(os.Stdout, reader)
            os.Exit(0) // normal exit
        }

        // No gzip compression: read the body directly
        body, _ := ioutil.ReadAll(res.Body)
        fmt.Println(string(body))
    }
}

C#

Standard library

Standard library

Usage notes

  • Works for both HTTP and HTTPS pages
using System;
using System.Text;
using System.Net;
using System.IO;
using System.IO.Compression;

namespace csharp_http
{
    class Program
    {
        static void Main(string[] args)
        {
            // Target page
            string page_url = "http://dev.kdlapi.com/testproxy";

            // Build the request
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(page_url);
            request.Method = "GET";
            request.Headers.Add("Accept-Encoding", "gzip");  // gzip compression speeds up the transfer

            // Proxy server
            string proxy_ip = "59.38.241.25";
            int proxy_port = 23916;

            // Username/password authentication (private/dedicated proxy)
            string username = "username";
            string password = "password";

            // Set the proxy (open proxy, or private/dedicated proxy with whitelist configured)
            // request.Proxy = new WebProxy(proxy_ip, proxy_port);

            // Set the proxy (private/dedicated proxy without whitelist)
            WebProxy proxy = new WebProxy();
            proxy.Address = new Uri(String.Format("http://{0}:{1}", proxy_ip, proxy_port));
            proxy.Credentials = new NetworkCredential(username, password);
            request.Proxy = proxy;

            // Request the target page
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            Console.WriteLine((int)response.StatusCode);  // status code

            // Decompress and read the response
            using (StreamReader reader = new StreamReader(new GZipStream(response.GetResponseStream(), CompressionMode.Decompress))) {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}

Node.js

Standard library (http, url)

Standard library (works for both HTTP and HTTPS)

Usage notes

  • Works for both HTTP and HTTPS pages
const http = require("http");  // built-in http module
const url  = require("url");

// Target page
const targetUrl = "http://dev.kdlapi.com/testproxy";
const urlParsed = url.parse(targetUrl);

// Proxy server
const proxyIp = "proxyip";  // proxy server IP
const proxyPort = "proxyport"; // proxy server port

// Username/password authentication (private/dedicated proxy)
const username = "username";
const password = "password";
const base64 = new Buffer(username + ":" + password).toString("base64");

const options = {
    host    : proxyIp,
    port    : proxyPort,
    path    : targetUrl,
    method  : "GET",
    headers : {
        "Host"                : urlParsed.hostname,
        "Proxy-Authorization" : "Basic " + base64
    }
};

http.request(options, (res) => {
        console.log("got response: " + res.statusCode);
        // Print the response body (gzip-compressed)
        if (res.headers['content-encoding'] && res.headers['content-encoding'].indexOf('gzip') != -1) {
            let zlib = require('zlib');
            let unzip = zlib.createGunzip();
            res.pipe(unzip).pipe(process.stdout);
        } else {
            // Print the response body (uncompressed)
            res.pipe(process.stdout);
        }
    })
    .on("error", (err) => {
        console.log(err);
    })
    .end()
;

Standard library (http, tls, util)

Standard library (for HTTP and HTTPS requests)

Usage notes

  • Works for both HTTP and HTTPS pages
let http = require('http'); // built-in http module
let tls = require('tls'); // built-in tls module
let util = require('util');

// Username/password authentication (private/dedicated proxy)
const username = 'username';
const password = 'password';
const auth = 'Basic ' + new Buffer(username + ':' + password).toString('base64');

// Proxy server IP and port
let proxy_ip = '59.38.241.25';
let proxy_port = 23916;

// Target host and path
let remote_host = 'dev.kdlapi.com';
let remote_path = '/testproxy';

// Send a CONNECT request to the proxy
let req = http.request({
    host: proxy_ip,
    port: proxy_port,
    method: 'CONNECT',
    path: util.format('%s:443', remote_host),
    headers: {
        "Host": remote_host,
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3100.0 Safari/537.36",
        "Proxy-Authorization": auth,
        "Accept-Encoding": "gzip"   // gzip compression speeds up the transfer
    }
});

req.on('connect', function (res, socket, head) {
    // TLS handshake
    let tlsConnection = tls.connect({
        host: remote_host,
        socket: socket
    }, function () {
        // Send the GET request
        tlsConnection.write(util.format('GET %s HTTP/1.1\r\nHost: %s\r\n\r\n', remote_path, remote_host));
    });
    tlsConnection.on('data', function (data) {
        // Print the response (the raw response message)
        console.log(data.toString());
    });
});
req.end();

request

request

Usage notes

  • Install the request library first: npm install request
  • Works for both HTTP and HTTPS pages
let request = require('request'); // third-party request library
let util = require('util');
let zlib = require('zlib');

// Username/password authentication (private/dedicated proxy)
const username = 'username';
const password = 'password';

// Target page
let page_url = 'https://dev.kdlapi.com/testproxy';

// Proxy server IP and port
let proxy_ip = '59.38.241.25';
let proxy_port = 23916;

// Full proxy server URL
let proxy = util.format('http://%s:%s@%s:%d', username, password, proxy_ip, proxy_port);

// Send the request
request({
    url: page_url,
    method: 'GET',
    proxy: proxy,
    headers: {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3100.0 Safari/537.36",
        "Accept-Encoding": "gzip"   // gzip compression speeds up the transfer
    },
    encoding: null,  // keep the body as a Buffer for decompression
}, function(error, res, body) {
    if (!error && res.statusCode == 200) {
        // Print the response body (gzip-compressed)
        if (res.headers['content-encoding'] && res.headers['content-encoding'].indexOf('gzip') != -1) {
            zlib.gunzip(body, function(err, dezipped) {
                console.log(dezipped.toString());
            });
        } else {
            // Print the response body (uncompressed)
            console.log(body);
        }
    } else {
        console.log(error);
    }
});

Puppeteer

Puppeteer (IP whitelist)

Usage notes

  • IP-whitelist-based HTTP/HTTPS proxying with Puppeteer
  • Requirements: Node 7.6.0 or later, Puppeteer
  • Install Puppeteer first: npm i puppeteer
// Import the puppeteer module
const puppeteer = require('puppeteer');

// Target page
const url = 'http://dev.kuaidaili.com/testproxy';

// Extra headers
const headers = {
    'Accept-Encoding': 'gzip' // gzip compression speeds up the transfer
};

// Proxy server IP and port
let proxy_ip = '59.38.241.25';
let proxy_port = 23916;

(async () => {
    // Launch a browser instance
    const browser = await puppeteer.launch({
        headless: false,  // whether to hide the window; defaults to true, false is handy for debugging
        args: [
            `--proxy-server=${proxy_ip}:${proxy_port}`,
            '--no-sandbox',
            '--disable-setuid-sandbox'
        ]
    });
    // Open a new page
    const page = await browser.newPage();
    // Set the headers
    await page.setExtraHTTPHeaders(headers);
    // Visit the target page
    await page.goto(url);
})();
Puppeteer (username/password authentication)

Usage notes

  • Username/password-based HTTP/HTTPS proxying with Puppeteer
  • Requirements: Node 7.6.0 or later, Puppeteer
  • Install Puppeteer first: npm i puppeteer
// Import the puppeteer module
const puppeteer = require('puppeteer');

// Target page
const url = 'http://dev.kuaidaili.com/testproxy';

// Extra headers
const headers = {
    'Accept-Encoding': 'gzip' // gzip compression speeds up the transfer
};

// Proxy server IP and port
let proxy_ip = '223.198.230.41';
let proxy_port = 19732;

// Username/password authentication (private/dedicated proxy)
const username = 'username';
const password = 'password';

(async () => {
    // Launch a browser instance
    const browser = await puppeteer.launch({
        headless: false,  // whether to hide the window; defaults to true, false is handy for debugging
        args: [
            `--proxy-server=${proxy_ip}:${proxy_port}`,
            '--no-sandbox',
            '--disable-setuid-sandbox'
        ]
    });
    // Open a new page
    const page = await browser.newPage();
    // Set the headers
    await page.setExtraHTTPHeaders(headers);
    // Username/password authentication
    await page.authenticate({username: username, password: password});
    // Visit the target page
    await page.goto(url);
})();

Ruby

net/http

net/http (IP whitelist)

Usage notes

  • IP-whitelist-based HTTP/HTTPS proxying with net/http
# -*- coding: utf-8 -*-
require 'net/http'  # built-in net/http module
require 'zlib'
require 'stringio'

# Proxy server IP and port
proxy_ip = '59.38.241.25'
proxy_port = 23916

# Target page (Kuaidaili's testproxy page, as an example)
page_url = "https://dev.kuaidaili.com/testproxy"
uri = URI(page_url)

# Build a proxy-enabled HTTP class
proxy = Net::HTTP::Proxy(proxy_ip, proxy_port)

# Build the request object
req = Net::HTTP::Get.new(uri)

# Set the User-Agent
req['User-Agent'] = 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50'
req['Accept-Encoding'] = 'gzip'  # gzip compression speeds up the transfer

# Send the request through the proxy; for an HTTP page, set use_ssl to false
res = proxy.start(uri.hostname, uri.port, :use_ssl => true) do |http|
    http.request(req)
end

# Print the status code
puts "status code: #{res.code}"

# Print the response body
if res.code.to_i != 200 then
    puts "page content: #{res.body}"
else
    gz = Zlib::GzipReader.new(StringIO.new(res.body.to_s))
    puts "page content: #{gz.read}"
end
net/http (username/password authentication)

Usage notes

  • Username/password-based HTTP/HTTPS proxying with net/http
# -*- coding: utf-8 -*-
require 'net/http'  # built-in net/http module
require 'zlib'
require 'stringio'

# Proxy server IP and port
proxy_ip = '59.38.241.25'
proxy_port = 23916

# Username/password authentication (private/dedicated proxy)
username = 'username'
password = 'password'

# Target page (Kuaidaili's testproxy page, as an example)
page_url = "https://dev.kuaidaili.com/testproxy"
uri = URI(page_url)

# Build a proxy-enabled HTTP class
proxy = Net::HTTP::Proxy(proxy_ip, proxy_port, username, password)

# Build the request object
req = Net::HTTP::Get.new(uri)

# Set proxy username/password authentication (private/dedicated proxy)
req.basic_auth(username, password)

# Set the User-Agent
req['User-Agent'] = 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50'
req['Accept-Encoding'] = 'gzip'  # gzip compression speeds up the transfer

# Send the request through the proxy; for an HTTP page, set use_ssl to false
res = proxy.start(uri.hostname, uri.port, :use_ssl => true) do |http|
    http.request(req)
end

# Print the status code
puts "status code: #{res.code}"

# Print the response body
if res.code.to_i != 200 then
    puts "page content: #{res.body}"
else
    gz = Zlib::GzipReader.new(StringIO.new(res.body.to_s))
    puts "page content: #{gz.read}"
end

HTTParty

HTTParty (IP whitelist)

Usage notes

  • IP-whitelist-based HTTP/HTTPS proxying with HTTParty
require "httparty"  # 引入httparty模块
require 'zlib'
require 'stringio'
# 代理服务器ip和端口
proxy_ip = '59.38.241.25'
proxy_port = 23916
# 要访问的目标网页, 以快代理testproxy页面为例
page_url = 'https://dev.kuaidaili.com/testproxy'
# 设置headers
headers = {
    "user-agent" => "mozilla/5.0 (macintosh; u; intel mac os x 10_6_8; en-us) applewebkit/534.50 (khtml, like gecko) version/5.1 safari/534.50",
    "accept-encoding" => "gzip",
}
# 设置代理
options = {
    :headers => headers, 
    :http_proxyaddr => proxy_ip, 
    :http_proxyport => proxy_port,
}
# 发起请求
res = httparty.get(page_url, options)
# 输出状态码
puts "status code: #{res.code}"
# 输出响应体
if  res.code.to_i != 200 then
    puts "page content: #{res.body}"
else
    gz = zlib::gzipreader.new(stringio.new(res.body.to_s))
    puts "page content: #{gz.read}" 
end
HTTParty (username/password authentication)

Usage notes

  • Username/password-based HTTP/HTTPS proxying with HTTParty
require "httparty"  # 引入httparty模块
require 'zlib'
require 'stringio'
# 代理服务器ip和端口
proxy_ip = '59.38.241.25'
proxy_port = 23916
# 用户名密码认证(私密代理/独享代理)
username = 'username'
password = 'password'
# 要访问的目标网页,以快代理testproxy页面为例
page_url = 'https://dev.kuaidaili.com/testproxy'
# 设置headers
headers = {
    "user-agent" => "mozilla/5.0 (macintosh; u; intel mac os x 10_6_8; en-us) applewebkit/534.50 (khtml, like gecko) version/5.1 safari/534.50",
    "accept-encoding" => "gzip",
}
# 设置代理
options = {
    :headers => headers, 
    :http_proxyaddr => proxy_ip, 
    :http_proxyport => proxy_port, 
    :http_proxyuser => username, 
    :http_proxypass => password,
}
# 发起请求
res = httparty.get(page_url, options)
# 输出状态码
puts "status code: #{res.code}"
# 输出响应体
if  res.code.to_i != 200 then
    puts "page content: #{res.body}"
else
    gz = zlib::gzipreader.new(stringio.new(res.body.to_s))
    puts "page content: #{gz.read}" 
end

PHP

cURL

cURL

Usage notes

  1. This sample supports both HTTP and HTTPS pages
  2. curl is not enabled in PHP by default; install it first:
    Ubuntu/Debian: apt-get install php5-curl
    CentOS: yum install php-curl

<?php
// Target page
$page_url = "http://dev.kdlapi.com/testproxy";
$ch = curl_init();

$proxy_ip = "59.38.241.25";
$proxy_port = "23916";
$proxy = $proxy_ip.":".$proxy_port;

// Username/password authentication (private/dedicated proxy)
$username   = "username";
$password   = "password";

curl_setopt($ch, CURLOPT_URL, $page_url);

// Send a POST request
$requestData["post"] = "send post request";
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($requestData));

curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);

// Set the proxy
curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_HTTP);
curl_setopt($ch, CURLOPT_PROXY, $proxy);

// Set the proxy username and password
curl_setopt($ch, CURLOPT_PROXYAUTH, CURLAUTH_BASIC);
curl_setopt($ch, CURLOPT_PROXYUSERPWD, "{$username}:{$password}");

// Custom headers
$headers = array();
$headers["user-agent"] = 'User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0);';
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);

// Custom cookies
curl_setopt($ch, CURLOPT_COOKIE, '');

curl_setopt($ch, CURLOPT_ENCODING, 'gzip'); // gzip compression speeds up the transfer
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$result = curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);

echo $result;
echo "\n\nfetch ".$info['url']."\ntimeuse: ".$info['total_time']."s\n\n";
?>

Easy Language (易语言)

Using a proxy in Easy Language

Usage notes

  1. Two modules are required: 精易模块 (JingYi module) and 鱼刺类 (YuCi class)