
Follow Along My Journey - Target $100+ / month (for now)

About me:
I am a farmer by trade but I do enjoy programming as a hobby. I wish to turn that hobby into a second income.

Ideas:
Create a traffic exchange site
I have a lot of traffic exchange credits, so creating one of these sites and advertising it to members already on other traffic exchanges might yield good results. They are already interested in traffic exchanges, so they may see a new site and jump on it. I think the only people who really make money from traffic exchanges are the owners of the exchanges themselves.

Create an auto surf traffic exchange
As above, though I'm unsure if I can really use such traffic for anything useful. Is it too late now for auto surfs?
Maybe adding an opt-in mining pool to the auto surf could increase earnings.

Proxy Website
This will be a list of proxies scraped from the web. I have come up with a method to gather thousands of proxy servers passively, and I can also automatically detect whether they are working, their anonymity level, etc. See my next post for details; it is a work in progress right now.


De-Hash website
This will be a website that lets you look up md5/sha1 hashes etc., but shows a URL-shortener link before revealing the result.
I could also charge for bulk decodes. The downside is the database size. Just to brute-force all alphanumeric passwords of 1-6 characters and insert them into the database would take about 3-4 months per hash type. At that point the database would be 423 GB without any compression, and that's not even including Russian or Chinese character sets. I do not think this is viable,
but it's just an idea for now. Maybe a network of computers generating the database could cut the time,
and the database could be split by the first letter of the dehashed password.
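The keyspace above is easy to sanity-check. This is a hypothetical back-of-envelope sketch, not code from the project; it counts all alphanumeric passwords of length 1-6 and works out what storage-per-row the 423 GB figure would imply.

```python
# Back-of-envelope check of the de-hash keyspace: 62 symbols
# (a-z, A-Z, 0-9), lengths 1 through 6.
CHARSET = 62

total = sum(CHARSET ** n for n in range(1, 7))
print(f"{total:,} candidate passwords")   # 57,731,386,986

# The 423 GB estimate above implies roughly this many bytes per row:
bytes_per_row = 423 * 1024**3 / total
print(f"~{bytes_per_row:.1f} bytes per entry")
```

At roughly 58 billion rows, even a few bytes of per-row overhead adds hundreds of gigabytes, which is why splitting the table by first letter helps: each shard stays small enough to index and distribute.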

Second Life
I have a few ideas for scripts for this game, but I feel like it's probably a bit too late. From what I can tell the market is flooded: a case of more supply than demand.

Passive Methods:
I know a lot of people here use a VPS. That is fine, but you are looking at about $5 a month minimum; my passive methods run on
a Raspberry Pi 24/7, and the only cost is electricity, around $5 per year. https://bitsurf.club/ - Auto surf website that also pays 1 sat per link (60 seconds). I can earn about 1000 sat here a day (with time for page loads). $0.50 per week (estimated), $26 / year.



Total Passive Earnings - Electricity:
$21 per year


Currently working on:
Proxy Website

Target $100+ / month (for now)
 
The proxy website idea.
The proxy website is a script that runs on my Raspberry Pi; it communicates with a web server I have online through a backend API.
It works like so:

1. The site spiders a bunch of URLs for proxy servers.
2. The site tests all proxy servers it finds to make sure they work.
3. If no working servers are found, the link is classed as dead.
4. If servers are working, we save the URL and use it in future.
5. Now we google the working proxy server addresses.
6. This gives us new links containing only working proxy servers.
7. We monitor each URL, making sure we are constantly getting new working proxy servers from it.

What this ultimately means is that we have an ever-growing list of proxy servers and of websites that provide them. These can
be anything from pastebins (which will usually only work once before being classed as dead, since they stop giving us new proxies) to websites,
blogs, even forum posts.
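The steps above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: `fetch` and `test_proxy` are injected stand-ins for the real HTTP and proxy-checking routines.

```python
# Minimal sketch of the discovery loop: scrape one source URL, keep
# the proxies that work, and class the source dead if none do.
import re

IP_PORT = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}):(\d{1,5})")

def leech_source(url, fetch, test_proxy):
    """Return (working proxies, source still alive?) for one source URL."""
    found = IP_PORT.findall(fetch(url))
    working = [(ip, int(port)) for ip, port in found
               if test_proxy(ip, int(port))]
    # Steps 3-4: a source that yields no working proxies is classed as dead.
    return working, bool(working)

# Fake page and a fake tester that only accepts port 8080:
page = "1.2.3.4:8080 some text 5.6.7.8:3128"
print(leech_source("http://example.com/list",
                   lambda u: page,
                   lambda ip, port: port == 8080))
```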

This post is ultimately a dev log of my code:

The first thing I needed to do was set up a list of URLs in my MySQL database with these columns:
id - Unique ID number for the website
url - The URL of the website to leech proxies from
method - The extraction method; for now this defaults to 0, which is a simple regex search
timeadded - The timestamp the URL was added to the database
active - Active or not; rather than removing URLs when we are done with them, I set them to inactive when not in use.
This creates a blacklist of URLs, so we are not constantly re-adding previously deleted URLs.
lastchecked - The last timestamp the URL was leeched from. We will use this to make sure new proxies keep coming.


The url column is set to UNIQUE in the database, so it's impossible to add the same URL twice. I then started to fill the database with URLs we can
leech from. I can also run new extraction methods on old URLs later to see if that yields any results.
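A sketch of that table, using Python's built-in sqlite3 as a stand-in for MySQL (the column names follow the list above; the DDL itself is my guess, not the post's):

```python
# urls table sketch; sqlite3 stands in for the MySQL server.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE urls (
        id          INTEGER PRIMARY KEY AUTOINCREMENT,
        url         TEXT NOT NULL UNIQUE,  -- UNIQUE blocks duplicate sources
        method      INTEGER DEFAULT 0,     -- 0 = simple regex extraction
        timeadded   INTEGER,
        active      INTEGER DEFAULT 1,     -- 0 = blacklisted / retired
        lastchecked INTEGER
    )
""")
conn.execute("INSERT INTO urls (url) VALUES (?)",
             ("http://example.com/list.txt",))
try:
    # Re-inserting the same URL violates the UNIQUE constraint:
    conn.execute("INSERT INTO urls (url) VALUES (?)",
                 ("http://example.com/list.txt",))
except sqlite3.IntegrityError:
    print("duplicate url rejected")
```

The UNIQUE constraint does the blacklist bookkeeping for free: retired rows stay in the table with active=0, and any attempt to re-add them simply fails.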






Next I prepopulated this with a single URL:
https://raw.githubusercontent.com/clarketm/proxy-list/master/proxy-list-raw.txt
This is a GitHub file that is apparently updated automatically every day, although the list is 2 days old at this point. But it will
serve as a great test bed to see if our proxy-gathering theory works. From this single URL we should be able to generate tens,
if not hundreds, of other URLs to leech from.

Now it was time for me to jump into python and create some code.

First I made a MySQL database wrapper: just a few functions for making communication with the database nice and easy.
Then I ran a simple "SELECT * FROM urls WHERE active=1" query from Python. This gave me my single URL in a list (it would have given
me all active URLs, had there been more). I then launched an HttpWrapper (a custom requests class to make HTTP GET/POST easier)
and ran a simple regex to extract all the IP addresses, like so:
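The original regex isn't shown, so here is a hypothetical one-liner of the same shape: pull every ip:port pair out of a fetched page body and deduplicate.

```python
# Extract ip:port pairs from a page body with a regex (pattern is an
# assumption; the post's actual regex isn't shown).
import re

body = "Proxy list:\n103.21.244.1:80\n185.199.108.153:8080\n103.21.244.1:80\n"
pairs = sorted(set(re.findall(r"(\d{1,3}(?:\.\d{1,3}){3}):(\d{1,5})", body)))
print(pairs)   # deduplicated (ip, port) tuples
```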





OK, so that all works. Now I need a table to insert those IP addresses into. I also moved the code into its own
class, class_leech.py, just to clean things up a bit.

Proxy table structure
id - unique ID for each proxy address
ipaddress - IP address of the proxy
port - port of the proxy
added_time - time the proxy was added; if a proxy does not work within 12 hours, it gets set to not working
working - is the proxy working (see above)
lastchecked_time - last time this server was checked; I want to check anything not yet checked, or the oldest-checked, first.

More will be added later, such as proxy type (https, http, socks, etc.), but this is fine for now. All of that will be handled
by new classes anyway.


After running this new code I got 300 proxies added to my database:





Next I want to gather information on the extracted proxy servers. For this I need to try each one as an HTTP proxy, an HTTPS proxy, a SOCKS4 proxy,
and a SOCKS5 proxy. I will test these proxies against:
http - http://www.neverssl.com
https - https://whynohttps.com
socks4/5 - https://whynohttps.com

I will not only attempt to fetch these websites but also check the contents of the reply for a string I know to be in the HTML,
to make sure our request actually worked. The working flag will only be set to 1 if the server passes any of these tests. If the tests
repeatedly fail for 12 hours, the server will be removed. I will add a ping system later that pings each server and, if it is offline, removes it instantly.
Pings will also serve as a measure of response time.

I set timeouts to 15 seconds for now. This may need to increase later, but it is making scans unbearably long while
debugging, and anything slower than 15 seconds is unusable to me anyway. Because of the unreliability of proxies, we can't be sure that
an HTTP proxy that worked isn't also an HTTPS proxy that simply timed out the second time we used it. For this reason I will implement
a second scan that rechecks HTTP proxies against HTTPS every 3 hours or so.
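A sketch of what one content-verified check could look like. The post's HttpWrapper class isn't shown, so this uses the third-party requests library instead, and the marker string is a placeholder.

```python
# Hypothetical single-proxy check: fetch a known page through the proxy
# and confirm a marker string appears in the reply.

def proxies_for(ip, port, scheme="http"):
    """Build the proxy mapping the requests library expects."""
    proxy = f"{scheme}://{ip}:{port}"
    return {"http": proxy, "https": proxy}

def check_proxy(ip, port, url, marker, scheme="http", timeout=15):
    """True only if the page loads through the proxy AND contains marker."""
    import requests  # imported here so proxies_for stays dependency-free
    try:
        r = requests.get(url, proxies=proxies_for(ip, port, scheme),
                         timeout=timeout)
        return marker in r.text
    except requests.RequestException:
        return False

print(proxies_for("47.52.3.36", 3128))
```

Checking for a known string, rather than just a 200 status, guards against proxies that answer every request with their own landing page.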

As we can see below my code is working for http and https!






Now I need extra code to check if the servers work as SOCKS4/5. I am unsure how to distinguish between the two right now. I know
that SOCKS5 sends a reply back as a greeting when we connect to the socket. So I have two schools of thought:

1. Connect over a raw socket and look at the replies.
2. Set all proxies as SOCKS5 in the Python requests module and hope it errors out with something like "this is a SOCKS4 proxy".

I think method 1 will be faster, because I can just detect socket rejections and log them too :).

This is what I will work on today. I am posting this now as I will soon hit the post character limit. More to come....
 
Continuing on from where I left off: everything was very slow to complete scans. I wrote some SOCKS test code but found it was also
taking a long time, so I decided to work on the ping code next. This will tell me if a host is online or not. If not, we can set
it as offline in the database and never scan it again. Even if it comes back online, I don't want proxies that are not online 24/7.

I added a new column to the proxy_servers table, "pingtime", then created a simple function to ping each server. If the ping
fails, I set the server as not working. I am also logging response times. I did some tests here and found that a server can time out on pings
but come back online later if we check it again. Either way, I decided not to recheck such servers, as they
are low quality; I want a higher standard than most sites. I also decided not to bother with SOCKS4 for now, as
it doesn't support UDP. I may implement it in future, however.


I actually learnt a lot from this. A SOCKS5 client connects to the server's IP/port with a socket, then sends:

b'\x05\x01\x00'

This is a handshake request. The reply is:

b'\x05\x00'

The 05 means SOCKS5 and the 00 means no authentication is needed to use this server. Pretty cool stuff. Anyway, onwards:
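Those bytes map directly onto code. A hypothetical helper (not the project's actual code) that builds the client greeting and checks the server's reply:

```python
# SOCKS5 greeting: VER=0x05, NMETHODS=1, METHODS=[0x00] (no auth).
# The server replies with VER plus the method it selected.
import struct

GREETING = b"\x05\x01\x00"

def socks5_no_auth(reply: bytes) -> bool:
    """True if the server answered 'SOCKS5, no authentication required'."""
    if len(reply) != 2:
        return False
    ver, method = struct.unpack("!BB", reply)
    return ver == 5 and method == 0x00

print(socks5_no_auth(b"\x05\x00"))   # True
print(socks5_no_auth(b"\x05\xff"))   # False: 0xFF = no acceptable methods
```

This also answers the detection question above: an HTTP proxy will not answer the greeting with a valid two-byte SOCKS5 reply, so a bad parse is itself a useful signal.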

I now have the following done:
ping/request speed
identifying socks/http/https
leeching from urls

So I now need a way to search for new proxies. This is pretty simple: I take a random working proxy from my database, let's say:

47.52.3.36:3128

and then search for this IP on Bing (I use Bing because Google starts showing captchas if you search too often for IP addresses).





Every result contains our proxy and also other proxy servers too! Using this we can pull proxy URL sources from Bing.
So my first job is to make a script that searches Bing and returns a list of URLs from the reply data. For this I also added
a column to the MySQL database:

been_searched

This just keeps track of whether we have searched Bing for a given proxy yet.
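Extracting the result links can be as simple as a second regex pass. This is a sketch with invented HTML; a real results page would need sturdier parsing (or an HTML parser) and the post doesn't show the actual method used.

```python
# Pull candidate source links out of a search-results page.
import re

html = ('<a href="https://example.com/proxies.txt">result 1</a>'
        '<a href="https://example.org/free-proxy-list">result 2</a>')
urls = re.findall(r'href="(https?://[^"]+)"', html)
print(urls)
```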






As you know, we started with a single hard-coded leech URL. Now that we have new links coming in, I deleted it. We now
have 0 URLs in the database. So I let the tool loop over all working proxies looking for new URLs. The result:





442 potential leechable URLs. We currently have only 1 leech method, so I am not expecting many results, but let's run our
leech function now... For reference, before running we have 900 proxies.

OK... the scan hasn't even finished yet and I am at 1,533,637 proxies! Not good, really, because a lot of these will be bad or not working.
So based on this I am going to do a few things:

1. Put a limit on the URLs we scrape; for now I will set it to 100 URLs per run.
2. Set URLs with 0 proxies to dead. I was going to add new proxy-pulling methods, but we have so many proxies right now there is no need.
3. Add multithreading to the checker. This will allow us to scan 50 proxies at once rather than 1 at a time.
4. Add multithreading to the pinger.
5. If we don't get any good/working proxies from a site, remove that site completely.

This will take a long time to complete, maybe a few days of constant running, to sort through the junk. However, during this time I can
start work on the website code!
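The multithreading in items 3-4 can be sketched with the standard library's thread pool. This is an illustration, not the project's code; check() here is a dummy stand-in for the real HTTP/SOCKS test.

```python
# Check up to 50 proxies concurrently instead of one at a time.
from concurrent.futures import ThreadPoolExecutor

def check(proxy):
    ip, port = proxy
    return proxy, port == 8080   # pretend only port-8080 servers work

proxies = [("10.0.0.1", 8080), ("10.0.0.2", 3128), ("10.0.0.3", 8080)]

with ThreadPoolExecutor(max_workers=50) as pool:
    results = dict(pool.map(check, proxies))

working = [p for p, ok in results.items() if ok]
print(working)   # the two port-8080 entries
```

Because proxy checks spend almost all their time waiting on sockets, threads give a near-linear speedup here despite Python's GIL.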
 
Good luck with your journey, I hope you reach your target soon, bro.
But you could earn on freelancer sites with your programming skills;
you could surely get passive income of more than 100%, so why didn't you try that?
 

I work 7 days a week from March till September, about 13 hours a day :(. But I have the rest of the year off after that! So this is just a second income during my spare time :D.
 

Good luck, I hope you reach your goal. You are a hardworking man :) and you'll soon get there.
I hope I reach my goal too :D
 
After a lot of debugging, my proxy script is now running 24/7. It is far from finished, but it needs time to run to improve itself.
All functions were updated and put into multiple threads to run constantly. I had to debug a lot, because I run Manjaro Linux but my server runs
CentOS; some things were not compatible across the change of OS and needed fixing. The proxy server will now run and collect data
all week. In that time I am working on a new project.....

Youtube Project:

This project is simple: we will be making an automated YouTube channel that uploads 1-2 videos a day without us doing anything.
The content will come from Reddit. I have seen other channels like this before:




The plan:
Use Python to create the video
Make frames with OpenCV/Pillow
Use text-to-speech for narration
Use the Reddit API to pull threads/comments


Let's see how this goes :D. The great thing about this project is that I can show you my programming work by outputting a video.


Version 1 - Alpha
----------
This version will just pull the data from an AskReddit thread that I specify and do nothing fancy, but it will create a video
from the output.

Test thread:



+Added Reddit API wrapper
+Added title text wrap
+Added text-to-speech
+Added comment grabbing
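The title-wrap step can be done with the standard library alone. This is a hypothetical sketch (the title and width here are made up, and the real bot renders onto frames rather than printing):

```python
# Break a long reddit title into lines that fit a fixed-width video
# frame, using the stdlib textwrap module.
import textwrap

title = ("What is the most interesting thing you have "
         "ever learned by accident?")
lines = textwrap.wrap(title, width=30)   # max 30 characters per line
for line in lines:
    print(line)
```

The same line list can then be drawn one row at a time with Pillow's text-drawing calls, stepping the y coordinate by the line height.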




Here is an output of v0.1. This is very early; it took me about 2 hours to create.



If you watch this for a while, you will notice audio bugs: not only in the actual pronunciation of words, but it also keeps skipping. This is not present when I run the text-to-speech on its own and seems to be a bug in the Python modules I am using. I plan to fix this today and polish some things up, but it gives you an idea of the prototype at least.



















---------------------------------------------------------------------------
proxy_info_handler.identify_proxy_type() - Identifies whether proxies are SOCKS5 or HTTP/HTTPS



cleanupmanager.database_clean() - New function to handle cleanup of the database.
+Set URLs that have been leeched from before, have 0 proxies in the database, and are at least 6 hours old
to inactive, so others can be leeched in their place.
+Set URLs that have returned no new proxies in the last 48 hours to inactive; these could be pastebin links etc.
 
Ignore the text at the bottom of my last post; it is simply my to-do list (I write these threads up in Notepad as I work, before posting here).
 
I made some progress on my youtube / reddit bot today.

This was the output this morning:

This is the output now:


It still needs some polish. The sizing is off, for example; I may inflate the text size to make things more readable, or just experiment with the image size. And right now I am manually uploading the output to YouTube myself; this will be automated when I am happy with everything. But I like how this is going. At this rate I could easily make 1,000 YouTube videos a day, which could then be scheduled for release at 2 videos a day or whatever, taking literally no time out of my day.

The proxy script has been running solid now for two days. I plan on adding some cleanup functions to it over the next week and polishing this YouTube script :).
 
This is a very nice journey to see here. Do you have any updates on your success at the moment?
 
When I look around and see the journeys people are taking, it makes me realize that I still have a very long way to go and so much to learn. I am really glad I decided to check out this section of BeerMoneyForum.
 
