Tuesday, May 1, 2012

Snoop internal network data without breaking in. The info is already breaking out.

One day, while creating a pastie for some DevOps-related discussion and filtering out the organization-related data..... it just occurred to me how much internal information gets included in the long logs people paste online when asking for help.

Someone pasted this on 20-Mar-2012 at pastebin.com.
It says nothing much, except that 'assanka.com' probably uses Puppet, with the Puppet master at puppetmaster.virtual.office.assanka.com and 192.168.30.147 as its internal IP.

There are loads of pasties like it, quietly adding recon on the easily latched rooms that sit behind the web's heavily locked entry gates.

Now, this pastebin scrap, for example, hints at logs generated on some internal machine of Qualigaz's network, so some information about Qualigaz's internal network is floating wild in the open:
[+] Internal IPs in the 192.168.30.x range
[+] It's a Xen virtual machine
[+] SELinux is not enforced
[+] Running Debian GNU/Linux 5.0.2 (lenny)
[+] sshrsakey => AAAAB3NzaC1yc2E.......==
[+] sshdsakey => AAAAB3NzaC1kc3M.......==

You could have a look at http://pastebin.com/haiqVHCN and http://pastebin.com/iFMsYiwC for some more amusing data bursting out.

This was just from a handful of Google-searched pastebin.com results. Think what a full-blown pastebin scraper would do.
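To make that concrete, here's a minimal Python sketch of such a scraper. Fair warning: the archive-page layout, the raw-paste endpoint and the leak patterns are my guesses for illustration, not any documented pastebin API, and the site will happily ban aggressive crawlers:

```python
# Sketch: trawl recent pastes and grep them for "internal infrastructure"
# tells like RFC1918 IPs and Puppet facter output. Endpoints are assumptions.
import re
import time
import urllib.request

ARCHIVE_URL = "http://pastebin.com/archive"        # lists recent pastes
RAW_URL = "http://pastebin.com/raw.php?i={pid}"    # raw text of one paste

LEAK_PATTERNS = [
    # RFC1918 private address ranges: 10.x, 192.168.x, 172.16-31.x
    re.compile(r"\b(?:10(?:\.\d{1,3}){3}|192\.168(?:\.\d{1,3}){2}"
               r"|172\.(?:1[6-9]|2\d|3[01])(?:\.\d{1,3}){2})\b"),
    re.compile(r"ssh(?:rsa|dsa)key\s*=>\s*\S+"),   # Puppet facter SSH host keys
    re.compile(r"puppetmaster[\w.\-]*", re.I),     # Puppet master hostnames
]

def fetch(url):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    return urllib.request.urlopen(req, timeout=10).read().decode("utf-8", "replace")

def recent_paste_ids(archive_html):
    # Paste IDs show up as short alphanumeric hrefs on the archive page.
    return set(re.findall(r'href="/(\w{8})"', archive_html))

for pid in recent_paste_ids(fetch(ARCHIVE_URL)):
    text = fetch(RAW_URL.format(pid=pid))
    hits = [m.group(0) for p in LEAK_PATTERNS for m in p.finditer(text)]
    if hits:
        print(pid, "->", hits[:5])
    time.sleep(2)  # be polite; aggressive scraping gets you banned
```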

To be safe from such accidents, try using a service like ZeroBin (where the pastie sits 256-bit-AES encrypted at the server, and the encryption happens in your browser, so the server never even sees the plaintext).
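For a feel of what that buys you, here's a rough Python sketch of the same idea. ZeroBin itself encrypts in the browser and keeps the key in the URL fragment, which never reaches the server; the helper names here are mine, and it assumes the third-party 'cryptography' package:

```python
# Sketch: encrypt the pastie locally with AES-256-GCM so the paste service
# only ever stores ciphertext; the key is shared out of band.
import os
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_pastie(plaintext):
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)                         # 96-bit nonce for GCM
    ct = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    blob = base64.b64encode(nonce + ct).decode()   # this is what you paste
    return blob, base64.urlsafe_b64encode(key).decode()

def decrypt_pastie(blob, key_b64):
    raw = base64.b64decode(blob)
    key = base64.urlsafe_b64decode(key_b64)
    return AESGCM(key).decrypt(raw[:12], raw[12:], None).decode()

blob, key = encrypt_pastie("puppetmaster.virtual.office.example.com 192.168.30.147")
print("paste this ciphertext:", blob)
print("share this key out of band:", key)
assert decrypt_pastie(blob, key).startswith("puppetmaster")
```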

Saturday, March 31, 2012

Facebook blocks spam URLs, but their method looks useless

Facebook has a user-security service that checks URLs posted by its users for spam or malicious content, and blocks them if they appear on Facebook's blacklist.
More about it at http://blog.facebook.com/blog.php?post=403200567130

Some important text from the link above:
These automated systems don't just prevent spam and other annoyances. They also protect against dangerous websites that damage your computer or try to steal your information. ..........
Sometimes, spammers try to hide their malicious links behind URL shorteners like Tiny URL or bit.ly, and in rare cases, we may temporarily block all use of a specific shortener. If you hit a block while using a URL shortener, try a different one or just use the original URL for whatever you're trying to share.
These systems are so effective .......... 
In a very recent post on Facebook, I was trying to share the very awesome Google search link that displays a 3D graph shaped like a heart:
https://www.google.co.in/search?ix=seb&sourceid=chrome&ie=UTF-8&q=sqrt(cos(3*x))*cos(100*y)%2B1.5*sqrt(abs(x))+%2B+0.8+x+is+from+-1+to+1%2C+y+is+from+-1+to+1%2C+z+is+from+0.01+to+2.5
and Facebook refused to accept it, saying the URL https://www.google.co.in/search falls under its 'spammy link' category.
So, initially without even thinking of it from a security perspective, I converted it to a goo.gl short URL:
http://goo.gl/Xwhff
and tried that instead. And yeah..... it worked (that's why I'm writing about it, obviously).


So, how it works 
The way I think it works is plainly by matching the URL (minus the GET parameters passed to it) against the blacklist of URLs that Facebook maintains for this.
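If that guess is right, the check could be as shallow as something like this (the blacklist entries and the function here are made up for illustration):

```python
# Guess at the check: normalize the posted URL down to host + path,
# dropping the GET parameters entirely, then look it up in a blacklist.
from urllib.parse import urlparse

BLACKLIST = {
    "www.google.co.in/search",     # the base that tripped my post
    "evil.example.com/login",      # made-up entry
}

def is_spammy(url):
    parts = urlparse(url)
    base = parts.netloc + parts.path.rstrip("/")   # ?query=... is ignored
    return base.lower() in BLACKLIST

print(is_spammy("https://www.google.co.in/search?q=heart+graph"))  # True: blocked
print(is_spammy("http://goo.gl/Xwhff"))  # False: the shortener sails through
```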



The Problem
Bypassing such a system is real, real easy... just get the link redirected through any of the swarm of URL shorteners, page translators, or proxies, or..... simply spin up a new machine in the cloud and have it bounce visitors on to the desired URL.

Even if FB's awesome team succeeds in blacklisting the ever-growing crop of proxy and URL-shortener services, this technique of theirs still wouldn't catch your freshly launched, purpose-built redirector before it has bought you a decent window of time.
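For example, the "bounce" box is all of a dozen lines of stdlib Python; the target URL below is just a placeholder:

```python
# Sketch: a fresh cloud host that 302-redirects every request to the
# blocked destination. Until this brand-new host lands on a blacklist,
# any link pointing at it sails through the filter.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "https://www.google.co.in/search?q=..."   # placeholder destination

class Bounce(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", TARGET)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), Bounce).serve_forever()
```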



What I think would solve it
An intelligent security facilitator like Facebook wouldn't be shipping that blacklist to the client side, for several good reasons. So they must be checking the posted URL at their FB servers and then responding with any concerns about it.

In such a scenario, WHY don't they simply crawl the URL through to the final URL that responds, sending no HTTP referrer along the way?
That's the same method I use at webhoudini.appspot.com to fetch the final URL behind a shortened or redirected link, for people who want to validate a suspicious URL.
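A rough sketch of that resolution step, using nothing beyond the standard library (what webhoudini actually runs may well differ):

```python
# Sketch: follow the redirect chain to wherever it ends, with no Referer
# header sent, and report that final URL for the spam check.
import urllib.request

def final_url(url):
    # urlopen's default opener follows 301/302 chains on its own, and we
    # send no Referer at all; geturl() reports where the chain ended up.
    # (Some hosts reject HEAD; fall back to a plain GET if that happens.)
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.geturl()

print(final_url("http://goo.gl/Xwhff"))
```

Feed the resolved URL into the existing blacklist check, and http://goo.gl/Xwhff gets caught exactly like the raw Google link would.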




This way, they would never have to blacklist URL-shortener services, or any other legitimate URL bases for that matter, just to cover the chance of their redirecting to malicious links.