spider -- Bilibili search -- automated with dp (DrissionPage)

Published: 2024-12-07

Disclaimer: this article is shared for learning purposes only!

Automation tool: DrissionPage

DrissionPage official website

import time
from DrissionPage import ChromiumPage,ChromiumOptions
import pandas as pd

# A browser path is configured here; if you don't need one, just use page = ChromiumPage()
co = ChromiumOptions()
co.set_browser_path(r"D:\chrome-win64\chrome-win64\chrome.exe")  # raw string avoids backslash escapes
page = ChromiumPage(co)


name = input("Enter search query: ")  # input() already returns a str
page.get(f'https://search.bilibili.com/all?keyword={name}')
df = pd.DataFrame()
data_list = []

while True:
    # Scroll down a few times so lazily-loaded video cards render
    for i in range(3):
        page.scroll.down(500)
        print("*" * 30)
    link_a = page.eles("x://div[@class='bili-video-card__wrap __scale-wrap']/a")  # link elements (no /@href, so .link works below)
    # name_a = page.eles('.:bili-video-card__info--author') # author
    time_a = page.eles('.:bili-video-card__info--date')   # publish date
    name_n = page.eles('.:bili-video-card__info--owner')  # uploader
    namesa = page.eles('.:bili-video-card__stats')        # view count / duration

    for lj, fb, zy, bf in zip(link_a, time_a, name_n, namesa):
        a = lj.link
        # b = yh.text.replace('\n', '')
        c = fb.text.replace('\n', '').replace('·', '')
        d = zy.text.replace('\n', '')
        e = bf.text.replace('\n', '')
        print(a, c, d, e)
        data_list.append({
            '链接': a if a else None,
            # '用户': b if b else None,
            '发布时间': c if c else None,
            '用户': d if d else None,
            '播放量/视频时间': e if e else None,
        })
    # Concatenate once per page instead of once per row
    df = pd.concat([df, pd.DataFrame(data_list)], ignore_index=True)
    data_list = []
    df.to_excel(name + '---b站.xlsx', index=False)

    time.sleep(1)
    try:
        # '下一页' is the site's "Next page" button text -- don't translate it
        page.ele('@text()=下一页').click()
    except Exception:
        print("Done - no more pages.")
        break
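The script above interpolates the raw keyword straight into the URL; browsers usually tolerate that, but percent-encoding with the standard library's `urllib.parse.quote` is safer for non-ASCII queries. A minimal sketch (the keyword here is an arbitrary example):

```python
from urllib.parse import quote

# Chinese keywords become percent-encoded UTF-8 bytes in the query string
keyword = "原神"
url = f"https://search.bilibili.com/all?keyword={quote(keyword)}"
print(url)  # → https://search.bilibili.com/all?keyword=%E5%8E%9F%E7%A5%9E
```

Pass the encoded `url` to `page.get()` exactly as before.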


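Because the page is scrolled repeatedly before extraction, the same video card can be collected more than once. A simple safeguard is to deduplicate by link before saving; a minimal sketch with made-up sample rows (column names match the scraper above):

```python
import pandas as pd

# Hypothetical rows as the scraper would produce them; the first two are duplicates
rows = [
    {'链接': 'https://www.bilibili.com/video/BV1xx', '发布时间': '12-01', '用户': 'a', '播放量/视频时间': '1.2万 03:20'},
    {'链接': 'https://www.bilibili.com/video/BV1xx', '发布时间': '12-01', '用户': 'a', '播放量/视频时间': '1.2万 03:20'},
    {'链接': 'https://www.bilibili.com/video/BV2yy', '发布时间': '12-02', '用户': 'b', '播放量/视频时间': '800 01:05'},
]
# Keep the first occurrence of each link, renumber the index
df = pd.DataFrame(rows).drop_duplicates(subset='链接', ignore_index=True)
print(len(df))  # → 2
```

Calling `df.drop_duplicates(subset='链接', ignore_index=True)` just before `to_excel` in the main loop would have the same effect on the real data.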
 

