  Prediction: WW3 starting in 7 days
Posted by: chakal - 05-02-2022, 03:13 PM - Forum: WW3 - Replies (2)

May 9th, aka Victory Day for the Russians, will probably be when WW3 starts.

remember not to go out in the fallout "snow"


Profile
Posted by: Nigger - 05-02-2022, 03:09 PM - Forum: Suggestions - Replies (2)

give us profile banners and gif pfps

yours sincerely, Nigger.





  Let's do a little trolling
Posted by: KrautByte - 05-02-2022, 02:52 PM - Forum: Trolling - Replies (1)

https://encyclopediadramatica.online/Portal:Trolls - very good trolling wiki
https://www.youtube.com/watch?v=vcAHbvTlpKA - must watch


  List of working private servers
Posted by: chakal - 05-02-2022, 02:35 PM - Forum: AQW Servers - Replies (4)

the last survivors of shitqw private servers (they're pretty dead, but there are at least 5 players or so)

https://www.aqwworld.com/2017/10/aqw-pri...erver.html


Twitter videos to mp4
Posted by: eso - 05-02-2022, 02:06 PM - Forum: Flex - Replies (1)

Insert a list of Twitter video links into the "links" variable and specify the directory where you want the files dumped.

Code:
import requests
import json
import urllib.request
import os

# directory the videos get written to
path = r"path\to\directory"

# one Twitter video link per line
links = """
twitter-video-link-1
twitter-video-link-2
"""

links = links.split("\n")
fails = []
for link in links:
    if link == "":
        continue
    # the tweet ID is the 6th path segment of a status URL, minus any query string
    twt_id = link.split("/")[5].split("?", 1)[0]
    out_file = os.path.join(path, twt_id + ".mp4")
    if os.path.exists(out_file):
        continue  # already downloaded
    r = requests.get("https://tweetpik.com/api/tweets/{}/video".format(twt_id))
    data = json.loads(r.text)
    print("Downloading", twt_id)
    try:
        # pick the variant with the highest bitrate
        variants = data["variants"]
        bitrate = 0
        dwld_link = None
        for variant in variants:
            if variant["bitrate"] > bitrate:
                bitrate = variant["bitrate"]
                dwld_link = variant["url"]
        urllib.request.urlretrieve(dwld_link, out_file)
    except Exception:
        print("Failed to download", twt_id)
        fails += [link]

# list every link that could not be downloaded
for fail in fails:
    print(fail)
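
A quick sketch of what the script expects as input, assuming standard status URLs (the link below is made up, not a real tweet):

Code:
# hypothetical status URL, only to show which path segment becomes the tweet ID
link = "https://twitter.com/someuser/status/1234567890123456789?s=20"
twt_id = link.split("/")[5].split("?", 1)[0]
print(twt_id)  # -> 1234567890123456789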


Dumping my meme folder
Posted by: Notaboredguy - 05-02-2022, 01:57 PM - Forum: Memes - Replies (3)

I can't because the dumb admin has the wrong settings


  Spotify playlist to mp3 (youtube-result)
Posted by: eso - 05-02-2022, 01:57 PM - Forum: Flex - Replies (1)

Converts the songs in a given Spotify playlist to mp3 files by downloading the top YouTube search result for each track.

Code:
from bs4 import BeautifulSoup
import requests
from time import sleep
import json
import os
import pytube
from youtubesearchpython import VideosSearch
from moviepy.editor import AudioFileClip
import threading

# Spotify API credentials (fill these in)
client_id = ""
client_secret = ""

auth_url = "https://accounts.spotify.com/api/token"

# POST request for a client-credentials access token
auth_response = requests.post(auth_url, {
    'grant_type'    : 'client_credentials',
    'client_id'     : client_id,
    'client_secret' : client_secret
    })

# Spotify playlist to download
url = "[insert-spotify-playlist-link]"

# convert the response to JSON
auth_response_data = auth_response.json()

# save the access token
access_token = auth_response_data['access_token']

# access all endpoints
headers = {
    'Authorization': 'Bearer {token}'.format(token=access_token)
    }

# base URL all Spotify API endpoints
base_url = 'https://api.spotify.com/v1/'

# list of playlist URLs to scrape (just the one for now)
url_list = [url]


def song_scrape(tracks, songs=None):
    # Collect track metadata from a Spotify "tracks" object, following pagination.
    global headers
    if songs is None:
        songs = []
    for track in tracks["items"]:
        artists = [i["name"] for i in track["track"]["artists"]]
        if "Various Artists" in artists:
            artists.remove("Various Artists")
            #print(artists)
            #print(track)
        songs += [
            {
            "date":     track["added_at"],
            "url":      track["track"]["external_urls"]["spotify"],
            "album_url":track["track"]["album"]["external_urls"]["spotify"],
            "name":     track["track"]["name"],
            "album":    track["track"]["album"]["name"],
            "release":  track["track"]["album"]["release_date"],
            "image":    track["track"]["album"]["images"][0]["url"],
            "artists":  artists
            }
            ]
    if tracks["total"] - tracks["offset"] > 100:
        url = tracks["next"]
        #print(url, tracks["total"] - tracks["offset"])
        r = requests.get(url, headers=headers)
        sleep(2)
        source = json.loads(r.text)
        return song_scrape(source, songs)
    return songs
    
#loop through urls to create array of all of our song info
all_songs = []
for u in url_list:
    read_pg= requests.get(u)
    sleep(1)
    soup= BeautifulSoup(read_pg.text, "html.parser")
    songs= soup.find(id="initial-state")
    print(songs)
    songs = json.loads(songs.string)
    keys = songs["entities"]["items"].keys()
    for key1 in keys:
        if "playlist" in key1:
            key = key1
            break
    tracks = songs["entities"]["items"][key]["tracks"]
    all_songs += song_scrape(tracks)

def time_s(duration):
    duration = duration.split(":")
    total = 0
    for i in range(1, len(duration)+1):
        total += int(duration[-i])*60**(i-1)
    #print(duration, total)
    return total

def srch_result(artist, title, parent_dir=r"music_files/"):
    # Search YouTube for "<artist> <title> audio" and return the most viewed result.
    file_name = filename("{}-{}".format(artist, title)) + ".mp4"

    # skip tracks that already have an mp3 on disk
    if os.path.isfile(parent_dir + file_name[:-3] + "mp3"):
        return 0, 0

    info_scraped = dict()
    videosSearch = VideosSearch('{} {} audio'.format(artist, title), limit = 4)
    info = videosSearch.result()
    for result in info["result"]:
        view_count = result["viewCount"]["text"].split(" ")[0].replace(",","")

        if view_count.isnumeric():
            view_count = int(view_count)
        else:
            view_count = 0
        time = time_s(result["duration"])
        if time > 1200:
            continue
        else:
            info_scraped[result["id"]] = {
                "title"     : result["title"],
                "views"     : view_count,
                "channel"   : result["channel"]["name"],
                "link"      : result["link"],
                "duration"  : time
                }
    top_search = [0]
    for info in info_scraped:
        if top_search[0] < info_scraped[info]["views"]:
            top_search = [info_scraped[info]["views"], info]
    if top_search == [0]:
        return 0,0
    top_search = top_search[1]
    return info_scraped[top_search], file_name

def mp4_to_mp3(mp4):
    mp3 = mp4[:-3]+"mp3"
    mp4_without_frames = AudioFileClip(mp4)
    mp4_without_frames.write_audiofile(mp3)
    mp4_without_frames.close()
    os.remove(mp4)
    return

def ffmpeg_conv(mp4, duration):
    path_ffmpeg = os.getcwd()+"/ffmpeg"
    if not os.path.isfile(path_ffmpeg):
        print("Error: Missing ffmpeg")
        mp4_to_mp3(mp4)
        return
    mp3 = mp4[:-3]+"mp3"
    #cmd = "{} -ss 0 -i {} -t {} -c:v libx264 -c:a copy -preset ultrafast -crf 0 {}".format(path_ffmpeg, mp4[:-4]+"2.mp4", int(duration/2)+1, mp4)
    #os.system(cmd)

    # the duration error comes from it being estimated from a bitrate that is not set correctly by default
   
    cmd ="{} -i {} -b:a 192k -ar 48000 {}".format(path_ffmpeg, mp4, mp3)
    #cmd = "{} -i {} -vn {}".format(path_ffmpeg, mp4, mp3)
    os.system(cmd)
    while not os.path.isfile(mp3):
        sleep(1)

    if os.path.isfile(mp3):
        os.remove(mp4)
    return
   
def filename(name):
    illegal = "#%&{}\\<>*?/$!\":@ |"
    illegal_start = "- ._"
    name = list(name)
    if name[0] in illegal_start:
        name = name[1:]
    if len(name) > 254:
        name = name[:254]
    for i in range(len(name)):
        if name[i] in illegal:
            name[i] = "_"
    name = "".join(name).replace("(","[").replace(")","]")
    name = name.replace("'", "")
    return name

def download(top_search, file_name, parent_dir=r"music_files/"):
    # nothing to do if the search step found no usable result
    if top_search == 0:
        return
    yt = pytube.YouTube(top_search["link"])

    vids = yt.streams.filter(only_audio=True, file_extension="mp4")[-1].download(parent_dir, file_name)

    #Converts mp4 to mp3
    ffmpeg_conv(parent_dir+file_name, top_search["duration"])
   
def main():
    # Worker loop: each thread keeps pulling songs off the shared list until it is empty.
    global all_songs
    i = 0
    while len(all_songs):
        try:
            song = all_songs[i]
            yt_info, file_name = srch_result(song["artists"][0], song["name"])
            if song in all_songs:
                all_songs.remove(song)
            download(yt_info, file_name)
            print(len(all_songs))
        except Exception:
            # skip to the next song and retry the failed one later
            i += 1
        finally:
            # keep the index inside the (shrinking) list
            if i > 50 and len(all_songs) > 50:
                i = 0
            elif i >= len(all_songs):
                i = 0

# Spin up 30 worker threads; they all share all_songs without a lock, so the
# same track can occasionally be picked up twice.
for i in range(30):
    t = threading.Thread(target=main)
    t.start()
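
If you only want to test the YouTube side without the Spotify scraping, something along these lines should work with the helpers defined above (artist and track are made-up placeholders, not from the thread):

Code:
# hypothetical one-off run of the two helpers above
top, fname = srch_result("Some Artist", "Some Song")
if top:                    # srch_result returns (0, 0) when nothing usable turns up
    download(top, fname)   # grabs the audio-only mp4 stream and converts it to mp3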


Rainbow Triggered #standwithukraine tards
Posted by: chakal - 05-02-2022, 01:49 PM - Forum: Trolling - Replies (3)

[Image: unknown.png]


  Cool, keep me posted
Posted by: KrautByte - 05-02-2022, 01:43 PM - Forum: Memes - No Replies

   
Aaaah sweet times back in 2016


  Generating all permutations of an array of m different elements.
Posted by: eso - 05-02-2022, 01:39 PM - Forum: Flex - Replies (1)

Code:
def allPerms(m):
    # Return every permutation of [0, 1, ..., m-1] as a list of lists,
    # using plain backtracking: pick an element, recurse on the rest, undo.
    perm, perms = [], []

    def genPerms(remaining):
        if not remaining:              # nothing left to place -> perm is complete
            perms.append(perm.copy())
            return
        for i in remaining:
            perm.append(i)             # choose i for the next position
            rest = remaining.copy()
            rest.remove(i)
            genPerms(rest)             # permute whatever is left
            perm.pop()                 # backtrack

    genPerms(list(range(m)))
    return perms
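
Quick sanity check (the call below is just an illustration); for anything serious, itertools.permutations from the standard library does the same job lazily:

Code:
print(allPerms(3))
# [[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]]

# standard-library equivalent
import itertools
print([list(p) for p in itertools.permutations(range(3))])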
