Bloomberg Portfolio Analytics (PORT) empowers the biggest players in the financial world to manage their portfolios, assess exposures, and make decisions that move the markets. Our flagship product on the Bloomberg Professional service is a mission-critical tool used daily by money managers, mutual funds, hedge funds, and pension funds around the world. PORT provides industry-leading quantitative financial tools and overnight batch report generation, enabling investment professionals to understand factors impacting the returns of their portfolio over time, monitor intraday market movements in real-time, estimate potential losses under extreme market conditions via stochastic risk analysis, and generate new trading ideas.
Our highly scalable system processes billions of data points and complex calculations each day. The enterprise use and scale of our product impose stringent latency, availability, and throughput requirements. We are experiencing tremendous growth in our product and user base, and we are constantly looking to innovate on top of our existing software and technologies.
As the newest expansion of Bloomberg's globally-distributed PORT Engineering department, PORT's San Francisco-based Data Transparency team applies data science-oriented tooling, such as Jupyter notebooks and Apache Spark, to enable and improve data transparency and quality across the PORT application stack. Our products help other PORT developers and SREs understand and validate the billions of data points that are consumed by our systems daily, ultimately improving the quality of that data for our end users.
-Unique, compelling datasets curated by Bloomberg over decades of partnerships with the world's leading financial institutions
-Tens of terabytes of constantly growing data across various stores, at the core of a platform that delivers hundreds of thousands of reports daily
-Supportive colleagues, many with significant involvement in the open source projects we leverage in our products
-An inclusive office community, offering frequent technical training, professional development opportunities, and various employee communities
We'll trust you to:
-Apply your practical experience with large data sets to the challenges of our dynamic environment
-Own your contributions from development to deployment and beyond
-Collaborate with teammates locally and around the globe
You need to have:
-4+ years of professional programming experience with Python, Java, Scala, or comparable languages
-Experience with large, scalable distributed systems
-Strong knowledge of data structures and understanding of algorithms
-Pragmatic problem-solving skills
We'd love to see:
-Industry experience with Apache Spark, including Spark SQL
-Familiarity with Jupyter notebooks and associated infrastructure
-Experience with Kubernetes
-Strong skills in both object-oriented and functional programming
If this sounds like you, apply!